00:00:00.001 Started by upstream project "autotest-nightly" build number 4131 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3493 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.064 The recommended git tool is: git 00:00:00.064 using credential 00000000-0000-0000-0000-000000000002 00:00:00.067 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.098 Fetching changes from the remote Git repository 00:00:00.102 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.160 Using shallow fetch with depth 1 00:00:00.160 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.160 > git --version # timeout=10 00:00:00.215 > git --version # 'git version 2.39.2' 00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.253 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.917 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.930 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.942 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:05.942 > git config core.sparsecheckout # timeout=10 00:00:05.955 > git read-tree -mu HEAD # timeout=10 00:00:05.972 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:05.991 Commit message: "kid: add issue 3541" 00:00:05.991 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:06.115 [Pipeline] Start of Pipeline 00:00:06.128 [Pipeline] library 00:00:06.130 Loading library shm_lib@master 00:00:06.130 Library shm_lib@master is cached. Copying from home. 00:00:06.144 [Pipeline] node 00:00:06.156 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.157 [Pipeline] { 00:00:06.165 [Pipeline] catchError 00:00:06.166 [Pipeline] { 00:00:06.175 [Pipeline] wrap 00:00:06.181 [Pipeline] { 00:00:06.186 [Pipeline] stage 00:00:06.188 [Pipeline] { (Prologue) 00:00:06.198 [Pipeline] echo 00:00:06.199 Node: VM-host-SM38 00:00:06.203 [Pipeline] cleanWs 00:00:06.213 [WS-CLEANUP] Deleting project workspace... 00:00:06.213 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.220 [WS-CLEANUP] done 00:00:06.457 [Pipeline] setCustomBuildProperty 00:00:06.546 [Pipeline] httpRequest 00:00:06.928 [Pipeline] echo 00:00:06.929 Sorcerer 10.211.164.101 is alive 00:00:06.935 [Pipeline] retry 00:00:06.936 [Pipeline] { 00:00:06.947 [Pipeline] httpRequest 00:00:06.952 HttpMethod: GET 00:00:06.953 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:06.953 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:06.964 Response Code: HTTP/1.1 200 OK 00:00:06.964 Success: Status code 200 is in the accepted range: 200,404 00:00:06.965 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:09.605 [Pipeline] } 00:00:09.621 [Pipeline] // retry 00:00:09.628 [Pipeline] sh 00:00:09.913 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:09.927 [Pipeline] httpRequest 00:00:10.275 [Pipeline] echo 00:00:10.276 Sorcerer 10.211.164.101 is alive 00:00:10.281 [Pipeline] retry 00:00:10.283 [Pipeline] { 00:00:10.368 [Pipeline] httpRequest 00:00:10.372 HttpMethod: GET 00:00:10.373 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:10.373 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:10.402 Response Code: HTTP/1.1 200 OK 00:00:10.403 Success: Status code 200 is in the accepted range: 200,404 00:00:10.403 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:28.380 [Pipeline] } 00:01:28.398 [Pipeline] // retry 00:01:28.407 [Pipeline] sh 00:01:28.692 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:31.251 [Pipeline] sh 00:01:31.534 + git -C spdk log --oneline -n5 00:01:31.534 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:31.534 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:01:31.534 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:01:31.534 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:01:31.534 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:01:31.558 [Pipeline] writeFile 00:01:31.575 [Pipeline] sh 00:01:31.861 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:31.876 [Pipeline] sh 00:01:32.239 + cat autorun-spdk.conf 00:01:32.239 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.239 SPDK_TEST_NVME=1 00:01:32.239 SPDK_TEST_FTL=1 00:01:32.239 SPDK_TEST_ISAL=1 00:01:32.239 SPDK_RUN_ASAN=1 00:01:32.239 SPDK_RUN_UBSAN=1 00:01:32.239 SPDK_TEST_XNVME=1 00:01:32.239 SPDK_TEST_NVME_FDP=1 00:01:32.239 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.248 RUN_NIGHTLY=1 00:01:32.250 [Pipeline] } 00:01:32.266 [Pipeline] // stage 00:01:32.282 [Pipeline] stage 00:01:32.284 [Pipeline] { (Run VM) 00:01:32.297 [Pipeline] sh 00:01:32.586 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:32.586 + echo 'Start stage prepare_nvme.sh' 00:01:32.586 Start stage prepare_nvme.sh 00:01:32.586 + [[ -n 6 ]] 00:01:32.586 + disk_prefix=ex6 00:01:32.586 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:32.586 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:32.586 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:32.586 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.586 ++ SPDK_TEST_NVME=1 00:01:32.586 ++ SPDK_TEST_FTL=1 00:01:32.586 ++ SPDK_TEST_ISAL=1 
00:01:32.586 ++ SPDK_RUN_ASAN=1 00:01:32.586 ++ SPDK_RUN_UBSAN=1 00:01:32.586 ++ SPDK_TEST_XNVME=1 00:01:32.586 ++ SPDK_TEST_NVME_FDP=1 00:01:32.586 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.586 ++ RUN_NIGHTLY=1 00:01:32.586 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:32.586 + nvme_files=() 00:01:32.586 + declare -A nvme_files 00:01:32.586 + backend_dir=/var/lib/libvirt/images/backends 00:01:32.586 + nvme_files['nvme.img']=5G 00:01:32.586 + nvme_files['nvme-cmb.img']=5G 00:01:32.586 + nvme_files['nvme-multi0.img']=4G 00:01:32.586 + nvme_files['nvme-multi1.img']=4G 00:01:32.586 + nvme_files['nvme-multi2.img']=4G 00:01:32.586 + nvme_files['nvme-openstack.img']=8G 00:01:32.586 + nvme_files['nvme-zns.img']=5G 00:01:32.586 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:32.586 + (( SPDK_TEST_FTL == 1 )) 00:01:32.586 + nvme_files["nvme-ftl.img"]=6G 00:01:32.586 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:32.586 + nvme_files["nvme-fdp.img"]=1G 00:01:32.586 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:32.586 + for nvme in "${!nvme_files[@]}" 00:01:32.586 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:32.845 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.845 + for nvme in "${!nvme_files[@]}" 00:01:32.845 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:01:33.779 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:33.779 + for nvme in "${!nvme_files[@]}" 00:01:33.779 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:33.779 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:33.779 + for nvme in "${!nvme_files[@]}" 00:01:33.779 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:33.779 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:33.779 + for nvme in "${!nvme_files[@]}" 00:01:33.779 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:34.038 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:34.038 + for nvme in "${!nvme_files[@]}" 00:01:34.038 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:34.299 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:34.299 + for nvme in "${!nvme_files[@]}" 00:01:34.299 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:34.869 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:34.869 + for nvme in "${!nvme_files[@]}" 00:01:34.869 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:01:35.129 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:35.129 + for nvme in "${!nvme_files[@]}" 00:01:35.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:36.085 
Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.085 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:36.085 + echo 'End stage prepare_nvme.sh' 00:01:36.085 End stage prepare_nvme.sh 00:01:36.100 [Pipeline] sh 00:01:36.390 + DISTRO=fedora39 00:01:36.390 + CPUS=10 00:01:36.390 + RAM=12288 00:01:36.390 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:36.390 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:36.390 00:01:36.390 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:36.390 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:36.390 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:36.390 HELP=0 00:01:36.390 DRY_RUN=0 00:01:36.390 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:01:36.390 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:36.390 NVME_AUTO_CREATE=0 00:01:36.390 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:01:36.390 NVME_CMB=,,,, 00:01:36.390 NVME_PMR=,,,, 00:01:36.390 NVME_ZNS=,,,, 00:01:36.390 NVME_MS=true,,,, 00:01:36.390 NVME_FDP=,,,on, 00:01:36.390 SPDK_VAGRANT_DISTRO=fedora39 00:01:36.390 SPDK_VAGRANT_VMCPU=10 00:01:36.390 SPDK_VAGRANT_VMRAM=12288 00:01:36.390 SPDK_VAGRANT_PROVIDER=libvirt 00:01:36.390 SPDK_VAGRANT_HTTP_PROXY= 00:01:36.390 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:36.390 SPDK_OPENSTACK_NETWORK=0 00:01:36.390 VAGRANT_PACKAGE_BOX=0 00:01:36.390 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:36.390 FORCE_DISTRO=true 00:01:36.390 VAGRANT_BOX_VERSION= 00:01:36.390 EXTRA_VAGRANTFILES= 00:01:36.390 NIC_MODEL=e1000 00:01:36.390 00:01:36.390 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:36.390 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:38.936 Bringing machine 'default' up with 'libvirt' provider... 00:01:39.505 ==> default: Creating image (snapshot of base box volume). 00:01:39.505 ==> default: Creating domain with the following settings... 
00:01:39.505 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727645578_76d6914da47ad3d3fb08 00:01:39.505 ==> default: -- Domain type: kvm 00:01:39.506 ==> default: -- Cpus: 10 00:01:39.506 ==> default: -- Feature: acpi 00:01:39.506 ==> default: -- Feature: apic 00:01:39.506 ==> default: -- Feature: pae 00:01:39.506 ==> default: -- Memory: 12288M 00:01:39.506 ==> default: -- Memory Backing: hugepages: 00:01:39.506 ==> default: -- Management MAC: 00:01:39.506 ==> default: -- Loader: 00:01:39.506 ==> default: -- Nvram: 00:01:39.506 ==> default: -- Base box: spdk/fedora39 00:01:39.506 ==> default: -- Storage pool: default 00:01:39.506 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727645578_76d6914da47ad3d3fb08.img (20G) 00:01:39.506 ==> default: -- Volume Cache: default 00:01:39.506 ==> default: -- Kernel: 00:01:39.506 ==> default: -- Initrd: 00:01:39.506 ==> default: -- Graphics Type: vnc 00:01:39.506 ==> default: -- Graphics Port: -1 00:01:39.506 ==> default: -- Graphics IP: 127.0.0.1 00:01:39.506 ==> default: -- Graphics Password: Not defined 00:01:39.506 ==> default: -- Video Type: cirrus 00:01:39.506 ==> default: -- Video VRAM: 9216 00:01:39.506 ==> default: -- Sound Type: 00:01:39.506 ==> default: -- Keymap: en-us 00:01:39.506 ==> default: -- TPM Path: 00:01:39.506 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:39.506 ==> default: -- Command line args: 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:39.506 ==> default: -> value=-drive, 00:01:39.506 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:39.506 ==> default: -> value=-drive, 00:01:39.506 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:39.506 ==> default: -> value=-drive, 00:01:39.506 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.506 ==> default: -> value=-drive, 00:01:39.506 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.506 ==> default: -> value=-drive, 00:01:39.506 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:39.506 ==> default: -> value=-drive, 00:01:39.506 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:39.506 ==> default: -> value=-device, 00:01:39.506 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.765 ==> default: Creating shared folders metadata... 00:01:39.765 ==> default: Starting domain. 00:01:41.725 ==> default: Waiting for domain to get an IP address... 00:02:03.685 ==> default: Waiting for SSH to become available... 00:02:03.685 ==> default: Configuring and enabling network interfaces... 00:02:05.584 default: SSH address: 192.168.121.66:22 00:02:05.584 default: SSH username: vagrant 00:02:05.584 default: SSH auth method: private key 00:02:07.483 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:14.074 ==> default: Mounting SSHFS shared folder... 00:02:15.974 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:15.974 ==> default: Checking Mount.. 00:02:16.909 ==> default: Folder Successfully Mounted! 00:02:16.909 00:02:16.909 SUCCESS! 00:02:16.909 00:02:16.909 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:16.909 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:16.909 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:16.909 00:02:16.918 [Pipeline] } 00:02:16.934 [Pipeline] // stage 00:02:16.943 [Pipeline] dir 00:02:16.944 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:16.946 [Pipeline] { 00:02:16.959 [Pipeline] catchError 00:02:16.961 [Pipeline] { 00:02:16.974 [Pipeline] sh 00:02:17.253 + vagrant ssh-config --host vagrant 00:02:17.253 + sed -ne '/^Host/,$p' 00:02:17.253 + tee ssh_conf 00:02:19.780 Host vagrant 00:02:19.780 HostName 192.168.121.66 00:02:19.780 User vagrant 00:02:19.780 Port 22 00:02:19.780 UserKnownHostsFile /dev/null 00:02:19.780 StrictHostKeyChecking no 00:02:19.780 PasswordAuthentication no 00:02:19.780 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:19.780 IdentitiesOnly yes 00:02:19.780 LogLevel FATAL 00:02:19.780 ForwardAgent yes 00:02:19.780 ForwardX11 yes 00:02:19.780 00:02:19.792 [Pipeline] withEnv 00:02:19.794 [Pipeline] { 00:02:19.807 [Pipeline] sh 00:02:20.082 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:20.082 source /etc/os-release 00:02:20.082 [[ -e /image.version ]] && img=$(< /image.version) 00:02:20.082 # Minimal, systemd-like check. 
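# (/.dockerenv is created by Docker at the container root, so testing for it
#  is a cheap containment check that does not depend on systemd-detect-virt
#  being installed in the image.)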
00:02:20.082 if [[ -e /.dockerenv ]]; then 00:02:20.082 # Clear garbage from the node'\''s name: 00:02:20.082 # agt-er_autotest_547-896 -> autotest_547-896 00:02:20.082 # $HOSTNAME is the actual container id 00:02:20.082 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:20.082 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:20.082 # We can assume this is a mount from a host where container is running, 00:02:20.082 # so fetch its hostname to easily identify the target swarm worker. 00:02:20.082 container="$(< /etc/hostname) ($agent)" 00:02:20.082 else 00:02:20.082 # Fallback 00:02:20.082 container=$agent 00:02:20.082 fi 00:02:20.082 fi 00:02:20.082 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:20.082 ' 00:02:20.092 [Pipeline] } 00:02:20.106 [Pipeline] // withEnv 00:02:20.114 [Pipeline] setCustomBuildProperty 00:02:20.128 [Pipeline] stage 00:02:20.131 [Pipeline] { (Tests) 00:02:20.149 [Pipeline] sh 00:02:20.428 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:20.440 [Pipeline] sh 00:02:20.722 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:20.791 [Pipeline] timeout 00:02:20.791 Timeout set to expire in 50 min 00:02:20.792 [Pipeline] { 00:02:20.804 [Pipeline] sh 00:02:21.084 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:21.650 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:02:21.660 [Pipeline] sh 00:02:21.936 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:22.205 [Pipeline] sh 00:02:22.484 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:22.498 [Pipeline] sh 00:02:22.775 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:22.775 ++ readlink -f spdk_repo 00:02:22.775 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:22.775 + [[ -n /home/vagrant/spdk_repo ]] 00:02:22.775 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:22.775 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:22.775 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:22.775 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:22.775 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:22.775 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:22.775 + cd /home/vagrant/spdk_repo 00:02:22.775 + source /etc/os-release 00:02:22.775 ++ NAME='Fedora Linux' 00:02:22.775 ++ VERSION='39 (Cloud Edition)' 00:02:22.775 ++ ID=fedora 00:02:22.775 ++ VERSION_ID=39 00:02:22.775 ++ VERSION_CODENAME= 00:02:22.775 ++ PLATFORM_ID=platform:f39 00:02:22.775 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:22.775 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:22.775 ++ LOGO=fedora-logo-icon 00:02:22.775 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:22.775 ++ HOME_URL=https://fedoraproject.org/ 00:02:22.775 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:22.775 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:22.775 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:22.775 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:22.775 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:22.775 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:22.775 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:22.775 ++ SUPPORT_END=2024-11-12 00:02:22.775 ++ VARIANT='Cloud Edition' 00:02:22.775 ++ VARIANT_ID=cloud 00:02:22.775 + uname -a 00:02:22.775 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:22.775 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:23.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:23.600 Hugepages 00:02:23.600 node hugesize free / total 00:02:23.600 node0 1048576kB 0 / 0 00:02:23.600 node0 2048kB 0 / 0 00:02:23.600 00:02:23.600 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:23.600 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:23.600 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:23.600 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:23.600 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:23.600 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:23.600 + rm -f /tmp/spdk-ld-path 00:02:23.600 + source autorun-spdk.conf 00:02:23.600 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.600 ++ SPDK_TEST_NVME=1 00:02:23.600 ++ SPDK_TEST_FTL=1 00:02:23.600 ++ SPDK_TEST_ISAL=1 00:02:23.600 ++ SPDK_RUN_ASAN=1 00:02:23.600 ++ SPDK_RUN_UBSAN=1 00:02:23.600 ++ SPDK_TEST_XNVME=1 00:02:23.600 ++ SPDK_TEST_NVME_FDP=1 00:02:23.600 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.600 ++ RUN_NIGHTLY=1 00:02:23.600 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:23.600 + [[ -n '' ]] 00:02:23.600 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:23.600 + for M in /var/spdk/build-*-manifest.txt 00:02:23.600 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:23.600 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:23.600 + for M in /var/spdk/build-*-manifest.txt 00:02:23.600 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:23.600 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:23.600 + for M in /var/spdk/build-*-manifest.txt 00:02:23.600 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:23.600 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:23.600 ++ uname 00:02:23.600 + [[ Linux == \L\i\n\u\x ]] 00:02:23.600 + sudo dmesg -T 00:02:23.600 + sudo dmesg --clear 00:02:23.600 + dmesg_pid=5023 00:02:23.600 
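The 'setup.sh status' table above lists the four QEMU NVMe controllers defined in the domain arguments earlier (serials 12340-12343); note the guest probes them out of PCI order, so 0000:00:13.0 enumerates as nvme1. A minimal sketch for cross-checking which guest controller backs which backend image, using standard nvme sysfs attributes (device names follow this particular boot and will vary): 
for c in /sys/class/nvme/nvme*; do 
  # Prints controller name, PCI address, and serial, 
  # e.g. "nvme1 0000:00:13.0 12343" for the FDP controller in this run. 
  echo "$(basename "$c") $(cat "$c/address") $(cat "$c/serial")" 
done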
+ [[ Fedora Linux == FreeBSD ]] 00:02:23.600 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:23.600 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:23.600 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:23.600 + sudo dmesg -Tw 00:02:23.600 + [[ -x /usr/src/fio-static/fio ]] 00:02:23.600 + export FIO_BIN=/usr/src/fio-static/fio 00:02:23.600 + FIO_BIN=/usr/src/fio-static/fio 00:02:23.600 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:23.600 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:23.600 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:23.600 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:23.600 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:23.601 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:23.601 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:23.601 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:23.601 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:23.601 Test configuration: 00:02:23.601 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.601 SPDK_TEST_NVME=1 00:02:23.601 SPDK_TEST_FTL=1 00:02:23.601 SPDK_TEST_ISAL=1 00:02:23.601 SPDK_RUN_ASAN=1 00:02:23.601 SPDK_RUN_UBSAN=1 00:02:23.601 SPDK_TEST_XNVME=1 00:02:23.601 SPDK_TEST_NVME_FDP=1 00:02:23.601 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.601 RUN_NIGHTLY=1 21:33:42 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:23.601 21:33:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:23.601 21:33:42 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:23.601 21:33:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:23.601 21:33:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.601 21:33:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.601 21:33:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.601 21:33:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.601 21:33:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.601 21:33:42 -- paths/export.sh@5 -- $ export PATH 00:02:23.601 21:33:42 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.601 21:33:42 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:23.859 21:33:42 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:23.859 21:33:42 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727645622.XXXXXX 00:02:23.859 21:33:42 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727645622.6yFRU8 00:02:23.859 21:33:42 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:23.859 21:33:42 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:02:23.859 21:33:42 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:23.859 21:33:42 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:23.859 21:33:42 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:23.859 21:33:42 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:23.859 21:33:42 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:23.859 21:33:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.859 21:33:42 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:23.859 21:33:42 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:23.859 21:33:42 -- pm/common@17 -- $ local monitor 00:02:23.859 21:33:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.859 21:33:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.859 21:33:42 -- pm/common@25 -- $ sleep 1 00:02:23.859 21:33:42 -- pm/common@21 -- $ date +%s 00:02:23.859 21:33:42 -- pm/common@21 -- $ date +%s 00:02:23.859 21:33:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727645622 00:02:23.859 21:33:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727645622 00:02:23.859 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727645622_collect-vmstat.pm.log 00:02:23.859 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727645622_collect-cpu-load.pm.log 00:02:24.794 21:33:43 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:24.794 21:33:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:24.794 21:33:43 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:24.794 21:33:43 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:24.794 21:33:43 -- spdk/autobuild.sh@16 -- $ date -u 00:02:24.794 Sun Sep 29 09:33:43 PM UTC 2024 00:02:24.794 21:33:43 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:24.794 v25.01-pre-17-g09cc66129 
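The two 'Redirecting to ...' lines a few steps up come from start_monitor_resources: both collectors are launched with one shared 'date +%s' epoch (1727645622 in this run) so their cpu-load and vmstat logs can be paired afterwards. Roughly, with $spdk and $output standing in for the repo and output paths used above: 
ts=$(date +%s) 
for collector in collect-cpu-load collect-vmstat; do 
  # Each collector writes its samples to <output>/power/monitor.autobuild.sh.<epoch>_<collector>.pm.log 
  "$spdk/scripts/perf/pm/$collector" -d "$output/power" -l -p "monitor.autobuild.sh.$ts" & 
done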
00:02:24.795 21:33:43 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:24.795 21:33:43 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:24.795 21:33:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:24.795 21:33:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:24.795 21:33:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.795 ************************************ 00:02:24.795 START TEST asan 00:02:24.795 ************************************ 00:02:24.795 using asan 00:02:24.795 21:33:43 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:24.795 00:02:24.795 real 0m0.000s 00:02:24.795 user 0m0.000s 00:02:24.795 sys 0m0.000s 00:02:24.795 21:33:43 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:24.795 ************************************ 00:02:24.795 END TEST asan 00:02:24.795 21:33:43 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.795 ************************************ 00:02:24.795 21:33:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:24.795 21:33:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:24.795 21:33:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:24.795 21:33:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:24.795 21:33:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.795 ************************************ 00:02:24.795 START TEST ubsan 00:02:24.795 ************************************ 00:02:24.795 using ubsan 00:02:24.795 21:33:43 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:24.795 00:02:24.795 real 0m0.000s 00:02:24.795 user 0m0.000s 00:02:24.795 sys 0m0.000s 00:02:24.795 21:33:43 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:24.795 21:33:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.795 ************************************ 00:02:24.795 END TEST ubsan 00:02:24.795 ************************************ 00:02:24.795 21:33:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:24.795 21:33:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:24.795 21:33:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:24.795 21:33:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:24.795 21:33:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:24.795 21:33:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:24.795 21:33:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:24.795 21:33:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:24.795 21:33:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:25.054 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:25.054 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:25.314 Using 'verbs' RDMA provider 00:02:36.287 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:46.251 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:46.251 Creating mk/config.mk...done. 00:02:46.251 Creating mk/cc.flags.mk...done. 00:02:46.251 Type 'make' to build. 
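The './configure' invocation above is generated, not hand-written: get_config_params assembles it from the SPDK_TEST_*/SPDK_RUN_* switches sourced from autorun-spdk.conf. A simplified sketch of that mapping, covering only flags visible in this run's config_params (the real logic in autotest_common.sh handles many more switches): 
config_params='--enable-debug --enable-werror' 
[[ $SPDK_RUN_ASAN == 1 ]] && config_params+=' --enable-asan' 
[[ $SPDK_RUN_UBSAN == 1 ]] && config_params+=' --enable-ubsan' 
[[ $SPDK_TEST_XNVME == 1 ]] && config_params+=' --with-xnvme' 
./configure $config_params --with-shared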
00:02:46.252 21:34:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:46.252 21:34:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:46.252 21:34:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:46.252 21:34:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.513 ************************************ 00:02:46.513 START TEST make 00:02:46.513 ************************************ 00:02:46.513 21:34:05 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:46.513 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:46.513 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:46.513 meson setup builddir \ 00:02:46.513 -Dwith-libaio=enabled \ 00:02:46.513 -Dwith-liburing=enabled \ 00:02:46.513 -Dwith-libvfn=disabled \ 00:02:46.513 -Dwith-spdk=false && \ 00:02:46.513 meson compile -C builddir && \ 00:02:46.513 cd -) 00:02:46.513 make[1]: Nothing to be done for 'all'. 00:02:49.053 The Meson build system 00:02:49.053 Version: 1.5.0 00:02:49.053 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:49.053 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:49.053 Build type: native build 00:02:49.054 Project name: xnvme 00:02:49.054 Project version: 0.7.3 00:02:49.054 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:49.054 C linker for the host machine: cc ld.bfd 2.40-14 00:02:49.054 Host machine cpu family: x86_64 00:02:49.054 Host machine cpu: x86_64 00:02:49.054 Message: host_machine.system: linux 00:02:49.054 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:49.054 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:49.054 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:49.054 Run-time dependency threads found: YES 00:02:49.054 Has header "setupapi.h" : NO 00:02:49.054 Has header "linux/blkzoned.h" : YES 00:02:49.054 Has header "linux/blkzoned.h" : YES (cached) 00:02:49.054 Has header "libaio.h" : YES 00:02:49.054 Library aio found: YES 00:02:49.054 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:49.054 Run-time dependency liburing found: YES 2.2 00:02:49.054 Dependency libvfn skipped: feature with-libvfn disabled 00:02:49.054 Run-time dependency appleframeworks found: NO (tried framework) 00:02:49.054 Run-time dependency appleframeworks found: NO (tried framework) 00:02:49.054 Configuring xnvme_config.h using configuration 00:02:49.054 Configuring xnvme.spec using configuration 00:02:49.054 Run-time dependency bash-completion found: YES 2.11 00:02:49.054 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:49.054 Program cp found: YES (/usr/bin/cp) 00:02:49.054 Has header "winsock2.h" : NO 00:02:49.054 Has header "dbghelp.h" : NO 00:02:49.054 Library rpcrt4 found: NO 00:02:49.054 Library rt found: YES 00:02:49.054 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:49.054 Found CMake: /usr/bin/cmake (3.27.7) 00:02:49.054 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:49.054 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:49.054 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:49.054 Build targets in project: 32 00:02:49.054 00:02:49.054 xnvme 0.7.3 00:02:49.054 00:02:49.054 User defined options 00:02:49.054 with-libaio : enabled 00:02:49.054 with-liburing: enabled 00:02:49.054 with-libvfn : disabled 00:02:49.054 with-spdk : false 00:02:49.054 00:02:49.054 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:49.054 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:49.054 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:49.054 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:49.311 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:49.311 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:49.311 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:49.311 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:49.311 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:49.311 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:49.311 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:49.311 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:49.311 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:49.311 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:49.311 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:49.311 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:49.311 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:49.311 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:49.311 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:49.311 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:49.311 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:49.311 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:49.311 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:49.311 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:49.311 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:49.570 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:49.570 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:49.570 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:49.570 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:49.570 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:49.570 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:49.570 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:49.570 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:49.570 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:49.570 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:49.570 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:49.570 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:49.570 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:49.570 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:49.570 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:49.570 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:49.570 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:49.570 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:49.570 [42/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:49.570 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:49.570 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:49.570 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:49.570 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:49.570 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:49.570 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:49.570 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:49.570 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:49.570 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:49.570 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:49.570 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:49.570 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:49.570 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:49.570 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:49.570 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:49.570 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:49.570 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:49.828 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:49.828 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:49.828 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:49.828 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:49.828 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:49.828 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:49.828 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:49.828 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:49.828 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:49.828 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:49.828 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:49.828 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:49.828 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:49.828 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:49.828 [74/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:49.828 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:49.828 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:49.828 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:49.828 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:49.828 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:49.828 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:49.828 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:50.086 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:50.086 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:50.086 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:50.086 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:50.086 [86/203] Compiling C object 
lib/libxnvme.so.p/xnvme_cli.c.o 00:02:50.086 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:50.086 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:50.086 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:50.086 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:50.086 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:50.086 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:50.086 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:50.086 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:50.086 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:50.086 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:50.086 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:50.086 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:50.086 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:50.086 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:50.086 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:50.086 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:50.086 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:50.086 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:50.086 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:50.086 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:50.086 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:50.086 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:50.345 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:50.345 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:50.345 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:50.345 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:50.345 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:50.345 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:50.345 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:50.345 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:50.345 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:50.345 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:50.345 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:50.345 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:50.345 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:50.345 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:50.345 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:50.345 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:50.345 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:50.345 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:50.345 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:50.345 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:50.345 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 
00:02:50.345 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:50.345 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:50.345 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:50.345 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:50.345 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:50.345 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:50.345 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:50.603 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:50.603 [138/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:50.603 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:50.603 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:50.603 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:50.603 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:50.603 [143/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:50.603 [144/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:50.603 [145/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:50.603 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:50.603 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:50.603 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:50.603 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:50.603 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:50.603 [151/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:50.603 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:50.861 [153/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:50.861 [154/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:50.861 [155/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:50.861 [156/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:50.861 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:50.861 [158/203] Linking target lib/libxnvme.so 00:02:50.861 [159/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:50.861 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:50.861 [161/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:50.861 [162/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:50.861 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:50.861 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:50.861 [165/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:50.861 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:50.861 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:50.861 [168/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:50.861 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:50.861 [170/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:51.119 [171/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:51.119 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:51.119 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:51.119 [174/203] Linking static target lib/libxnvme.a 00:02:51.119 [175/203] Linking target tests/xnvme_tests_async_intf 00:02:51.119 [176/203] 
Linking target tests/xnvme_tests_lblk 00:02:51.119 [177/203] Linking target tests/xnvme_tests_scc 00:02:51.119 [178/203] Linking target tests/xnvme_tests_cli 00:02:51.119 [179/203] Linking target tests/xnvme_tests_enum 00:02:51.119 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:02:51.119 [181/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:51.119 [182/203] Linking target tests/xnvme_tests_ioworker 00:02:51.119 [183/203] Linking target tests/xnvme_tests_znd_append 00:02:51.119 [184/203] Linking target tests/xnvme_tests_buf 00:02:51.119 [185/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:51.119 [186/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:51.119 [187/203] Linking target tests/xnvme_tests_znd_state 00:02:51.119 [188/203] Linking target tests/xnvme_tests_map 00:02:51.119 [189/203] Linking target examples/xnvme_dev 00:02:51.119 [190/203] Linking target examples/xnvme_enum 00:02:51.119 [191/203] Linking target tools/lblk 00:02:51.119 [192/203] Linking target tools/xdd 00:02:51.119 [193/203] Linking target tools/zoned 00:02:51.119 [194/203] Linking target tools/xnvme 00:02:51.119 [195/203] Linking target tools/kvs 00:02:51.119 [196/203] Linking target tests/xnvme_tests_kvs 00:02:51.119 [197/203] Linking target tools/xnvme_file 00:02:51.119 [198/203] Linking target examples/zoned_io_sync 00:02:51.119 [199/203] Linking target examples/xnvme_single_sync 00:02:51.119 [200/203] Linking target examples/xnvme_io_async 00:02:51.119 [201/203] Linking target examples/zoned_io_async 00:02:51.119 [202/203] Linking target examples/xnvme_hello 00:02:51.119 [203/203] Linking target examples/xnvme_single_async 00:02:51.119 INFO: autodetecting backend as ninja 00:02:51.119 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:51.119 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:56.390 The Meson build system 00:02:56.390 Version: 1.5.0 00:02:56.390 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:56.390 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:56.390 Build type: native build 00:02:56.390 Program cat found: YES (/usr/bin/cat) 00:02:56.390 Project name: DPDK 00:02:56.390 Project version: 24.03.0 00:02:56.390 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:56.390 C linker for the host machine: cc ld.bfd 2.40-14 00:02:56.390 Host machine cpu family: x86_64 00:02:56.390 Host machine cpu: x86_64 00:02:56.390 Message: ## Building in Developer Mode ## 00:02:56.390 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:56.390 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:56.390 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:56.390 Program python3 found: YES (/usr/bin/python3) 00:02:56.390 Program cat found: YES (/usr/bin/cat) 00:02:56.390 Compiler for C supports arguments -march=native: YES 00:02:56.390 Checking for size of "void *" : 8 00:02:56.390 Checking for size of "void *" : 8 (cached) 00:02:56.390 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:56.390 Library m found: YES 00:02:56.390 Library numa found: YES 00:02:56.390 Has header "numaif.h" : YES 00:02:56.390 Library fdt found: NO 00:02:56.390 Library execinfo found: NO 00:02:56.390 Has header "execinfo.h" : YES 00:02:56.390 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:56.390 Run-time dependency libarchive 
found: NO (tried pkgconfig) 00:02:56.390 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:56.390 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:56.390 Run-time dependency openssl found: YES 3.1.1 00:02:56.390 Run-time dependency libpcap found: YES 1.10.4 00:02:56.390 Has header "pcap.h" with dependency libpcap: YES 00:02:56.390 Compiler for C supports arguments -Wcast-qual: YES 00:02:56.390 Compiler for C supports arguments -Wdeprecated: YES 00:02:56.390 Compiler for C supports arguments -Wformat: YES 00:02:56.390 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:56.390 Compiler for C supports arguments -Wformat-security: NO 00:02:56.390 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:56.390 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:56.390 Compiler for C supports arguments -Wnested-externs: YES 00:02:56.390 Compiler for C supports arguments -Wold-style-definition: YES 00:02:56.390 Compiler for C supports arguments -Wpointer-arith: YES 00:02:56.390 Compiler for C supports arguments -Wsign-compare: YES 00:02:56.390 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:56.390 Compiler for C supports arguments -Wundef: YES 00:02:56.390 Compiler for C supports arguments -Wwrite-strings: YES 00:02:56.390 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:56.390 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:56.390 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:56.390 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:56.391 Program objdump found: YES (/usr/bin/objdump) 00:02:56.391 Compiler for C supports arguments -mavx512f: YES 00:02:56.391 Checking if "AVX512 checking" compiles: YES 00:02:56.391 Fetching value of define "__SSE4_2__" : 1 00:02:56.391 Fetching value of define "__AES__" : 1 00:02:56.391 Fetching value of define "__AVX__" : 1 00:02:56.391 Fetching value of define "__AVX2__" : 1 00:02:56.391 Fetching value of define "__AVX512BW__" : 1 00:02:56.391 Fetching value of define "__AVX512CD__" : 1 00:02:56.391 Fetching value of define "__AVX512DQ__" : 1 00:02:56.391 Fetching value of define "__AVX512F__" : 1 00:02:56.391 Fetching value of define "__AVX512VL__" : 1 00:02:56.391 Fetching value of define "__PCLMUL__" : 1 00:02:56.391 Fetching value of define "__RDRND__" : 1 00:02:56.391 Fetching value of define "__RDSEED__" : 1 00:02:56.391 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:56.391 Fetching value of define "__znver1__" : (undefined) 00:02:56.391 Fetching value of define "__znver2__" : (undefined) 00:02:56.391 Fetching value of define "__znver3__" : (undefined) 00:02:56.391 Fetching value of define "__znver4__" : (undefined) 00:02:56.391 Library asan found: YES 00:02:56.391 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:56.391 Message: lib/log: Defining dependency "log" 00:02:56.391 Message: lib/kvargs: Defining dependency "kvargs" 00:02:56.391 Message: lib/telemetry: Defining dependency "telemetry" 00:02:56.391 Library rt found: YES 00:02:56.391 Checking for function "getentropy" : NO 00:02:56.391 Message: lib/eal: Defining dependency "eal" 00:02:56.391 Message: lib/ring: Defining dependency "ring" 00:02:56.391 Message: lib/rcu: Defining dependency "rcu" 00:02:56.391 Message: lib/mempool: Defining dependency "mempool" 00:02:56.391 Message: lib/mbuf: Defining dependency "mbuf" 00:02:56.391 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:56.391 Fetching value 
of define "__AVX512F__" : 1 (cached) 00:02:56.391 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:56.391 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:56.391 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:56.391 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:56.391 Compiler for C supports arguments -mpclmul: YES 00:02:56.391 Compiler for C supports arguments -maes: YES 00:02:56.391 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:56.391 Compiler for C supports arguments -mavx512bw: YES 00:02:56.391 Compiler for C supports arguments -mavx512dq: YES 00:02:56.391 Compiler for C supports arguments -mavx512vl: YES 00:02:56.391 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:56.391 Compiler for C supports arguments -mavx2: YES 00:02:56.391 Compiler for C supports arguments -mavx: YES 00:02:56.391 Message: lib/net: Defining dependency "net" 00:02:56.391 Message: lib/meter: Defining dependency "meter" 00:02:56.391 Message: lib/ethdev: Defining dependency "ethdev" 00:02:56.391 Message: lib/pci: Defining dependency "pci" 00:02:56.391 Message: lib/cmdline: Defining dependency "cmdline" 00:02:56.391 Message: lib/hash: Defining dependency "hash" 00:02:56.391 Message: lib/timer: Defining dependency "timer" 00:02:56.391 Message: lib/compressdev: Defining dependency "compressdev" 00:02:56.391 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:56.391 Message: lib/dmadev: Defining dependency "dmadev" 00:02:56.391 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:56.391 Message: lib/power: Defining dependency "power" 00:02:56.391 Message: lib/reorder: Defining dependency "reorder" 00:02:56.391 Message: lib/security: Defining dependency "security" 00:02:56.391 Has header "linux/userfaultfd.h" : YES 00:02:56.391 Has header "linux/vduse.h" : YES 00:02:56.391 Message: lib/vhost: Defining dependency "vhost" 00:02:56.391 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:56.391 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:56.391 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:56.391 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:56.391 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:56.391 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:56.391 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:56.391 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:56.391 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:56.391 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:56.391 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:56.391 Configuring doxy-api-html.conf using configuration 00:02:56.391 Configuring doxy-api-man.conf using configuration 00:02:56.391 Program mandb found: YES (/usr/bin/mandb) 00:02:56.391 Program sphinx-build found: NO 00:02:56.391 Configuring rte_build_config.h using configuration 00:02:56.391 Message: 00:02:56.391 ================= 00:02:56.391 Applications Enabled 00:02:56.391 ================= 00:02:56.391 00:02:56.391 apps: 00:02:56.391 00:02:56.391 00:02:56.391 Message: 00:02:56.391 ================= 00:02:56.391 Libraries Enabled 00:02:56.391 ================= 00:02:56.391 00:02:56.391 libs: 00:02:56.391 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:56.391 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:56.391 cryptodev, dmadev, power, reorder, security, vhost, 00:02:56.391 00:02:56.391 Message: 00:02:56.391 =============== 00:02:56.391 Drivers Enabled 00:02:56.391 =============== 00:02:56.391 00:02:56.391 common: 00:02:56.391 00:02:56.391 bus: 00:02:56.391 pci, vdev, 00:02:56.391 mempool: 00:02:56.391 ring, 00:02:56.391 dma: 00:02:56.391 00:02:56.391 net: 00:02:56.391 00:02:56.391 crypto: 00:02:56.391 00:02:56.391 compress: 00:02:56.391 00:02:56.391 vdpa: 00:02:56.391 00:02:56.391 00:02:56.391 Message: 00:02:56.391 ================= 00:02:56.391 Content Skipped 00:02:56.391 ================= 00:02:56.391 00:02:56.391 apps: 00:02:56.391 dumpcap: explicitly disabled via build config 00:02:56.391 graph: explicitly disabled via build config 00:02:56.391 pdump: explicitly disabled via build config 00:02:56.391 proc-info: explicitly disabled via build config 00:02:56.391 test-acl: explicitly disabled via build config 00:02:56.391 test-bbdev: explicitly disabled via build config 00:02:56.391 test-cmdline: explicitly disabled via build config 00:02:56.391 test-compress-perf: explicitly disabled via build config 00:02:56.391 test-crypto-perf: explicitly disabled via build config 00:02:56.391 test-dma-perf: explicitly disabled via build config 00:02:56.391 test-eventdev: explicitly disabled via build config 00:02:56.391 test-fib: explicitly disabled via build config 00:02:56.391 test-flow-perf: explicitly disabled via build config 00:02:56.391 test-gpudev: explicitly disabled via build config 00:02:56.391 test-mldev: explicitly disabled via build config 00:02:56.391 test-pipeline: explicitly disabled via build config 00:02:56.391 test-pmd: explicitly disabled via build config 00:02:56.391 test-regex: explicitly disabled via build config 00:02:56.391 test-sad: explicitly disabled via build config 00:02:56.391 test-security-perf: explicitly disabled via build config 00:02:56.392 00:02:56.392 libs: 00:02:56.392 argparse: explicitly disabled via build config 00:02:56.392 metrics: explicitly disabled via build config 00:02:56.392 acl: explicitly disabled via build config 00:02:56.392 bbdev: explicitly disabled via build config 00:02:56.392 bitratestats: explicitly disabled via build config 00:02:56.392 bpf: explicitly disabled via build config 00:02:56.392 cfgfile: explicitly disabled via build config 00:02:56.392 distributor: explicitly disabled via build config 00:02:56.392 efd: explicitly disabled via build config 00:02:56.392 eventdev: explicitly disabled via build config 00:02:56.392 dispatcher: explicitly disabled via build config 00:02:56.392 gpudev: explicitly disabled via build config 00:02:56.392 gro: explicitly disabled via build config 00:02:56.392 gso: explicitly disabled via build config 00:02:56.392 ip_frag: explicitly disabled via build config 00:02:56.392 jobstats: explicitly disabled via build config 00:02:56.392 latencystats: explicitly disabled via build config 00:02:56.392 lpm: explicitly disabled via build config 00:02:56.392 member: explicitly disabled via build config 00:02:56.392 pcapng: explicitly disabled via build config 00:02:56.392 rawdev: explicitly disabled via build config 00:02:56.392 regexdev: explicitly disabled via build config 00:02:56.392 mldev: explicitly disabled via build config 00:02:56.392 rib: explicitly disabled via build config 00:02:56.392 sched: explicitly disabled via build config 00:02:56.392 stack: explicitly disabled via build config 00:02:56.392 ipsec: explicitly disabled via build config 00:02:56.392 pdcp: explicitly disabled via build config 
00:02:56.392 fib: explicitly disabled via build config 00:02:56.392 port: explicitly disabled via build config 00:02:56.392 pdump: explicitly disabled via build config 00:02:56.392 table: explicitly disabled via build config 00:02:56.392 pipeline: explicitly disabled via build config 00:02:56.392 graph: explicitly disabled via build config 00:02:56.392 node: explicitly disabled via build config 00:02:56.392 00:02:56.392 drivers: 00:02:56.392 common/cpt: not in enabled drivers build config 00:02:56.392 common/dpaax: not in enabled drivers build config 00:02:56.392 common/iavf: not in enabled drivers build config 00:02:56.392 common/idpf: not in enabled drivers build config 00:02:56.392 common/ionic: not in enabled drivers build config 00:02:56.392 common/mvep: not in enabled drivers build config 00:02:56.392 common/octeontx: not in enabled drivers build config 00:02:56.392 bus/auxiliary: not in enabled drivers build config 00:02:56.392 bus/cdx: not in enabled drivers build config 00:02:56.392 bus/dpaa: not in enabled drivers build config 00:02:56.392 bus/fslmc: not in enabled drivers build config 00:02:56.392 bus/ifpga: not in enabled drivers build config 00:02:56.392 bus/platform: not in enabled drivers build config 00:02:56.392 bus/uacce: not in enabled drivers build config 00:02:56.392 bus/vmbus: not in enabled drivers build config 00:02:56.392 common/cnxk: not in enabled drivers build config 00:02:56.392 common/mlx5: not in enabled drivers build config 00:02:56.392 common/nfp: not in enabled drivers build config 00:02:56.392 common/nitrox: not in enabled drivers build config 00:02:56.392 common/qat: not in enabled drivers build config 00:02:56.392 common/sfc_efx: not in enabled drivers build config 00:02:56.392 mempool/bucket: not in enabled drivers build config 00:02:56.392 mempool/cnxk: not in enabled drivers build config 00:02:56.392 mempool/dpaa: not in enabled drivers build config 00:02:56.392 mempool/dpaa2: not in enabled drivers build config 00:02:56.392 mempool/octeontx: not in enabled drivers build config 00:02:56.392 mempool/stack: not in enabled drivers build config 00:02:56.392 dma/cnxk: not in enabled drivers build config 00:02:56.392 dma/dpaa: not in enabled drivers build config 00:02:56.392 dma/dpaa2: not in enabled drivers build config 00:02:56.392 dma/hisilicon: not in enabled drivers build config 00:02:56.392 dma/idxd: not in enabled drivers build config 00:02:56.392 dma/ioat: not in enabled drivers build config 00:02:56.392 dma/skeleton: not in enabled drivers build config 00:02:56.392 net/af_packet: not in enabled drivers build config 00:02:56.392 net/af_xdp: not in enabled drivers build config 00:02:56.392 net/ark: not in enabled drivers build config 00:02:56.392 net/atlantic: not in enabled drivers build config 00:02:56.392 net/avp: not in enabled drivers build config 00:02:56.392 net/axgbe: not in enabled drivers build config 00:02:56.392 net/bnx2x: not in enabled drivers build config 00:02:56.392 net/bnxt: not in enabled drivers build config 00:02:56.392 net/bonding: not in enabled drivers build config 00:02:56.392 net/cnxk: not in enabled drivers build config 00:02:56.392 net/cpfl: not in enabled drivers build config 00:02:56.392 net/cxgbe: not in enabled drivers build config 00:02:56.392 net/dpaa: not in enabled drivers build config 00:02:56.392 net/dpaa2: not in enabled drivers build config 00:02:56.392 net/e1000: not in enabled drivers build config 00:02:56.392 net/ena: not in enabled drivers build config 00:02:56.392 net/enetc: not in enabled drivers build 
config 00:02:56.392 net/enetfec: not in enabled drivers build config 00:02:56.392 net/enic: not in enabled drivers build config 00:02:56.392 net/failsafe: not in enabled drivers build config 00:02:56.392 net/fm10k: not in enabled drivers build config 00:02:56.392 net/gve: not in enabled drivers build config 00:02:56.392 net/hinic: not in enabled drivers build config 00:02:56.392 net/hns3: not in enabled drivers build config 00:02:56.392 net/i40e: not in enabled drivers build config 00:02:56.392 net/iavf: not in enabled drivers build config 00:02:56.392 net/ice: not in enabled drivers build config 00:02:56.392 net/idpf: not in enabled drivers build config 00:02:56.392 net/igc: not in enabled drivers build config 00:02:56.392 net/ionic: not in enabled drivers build config 00:02:56.392 net/ipn3ke: not in enabled drivers build config 00:02:56.392 net/ixgbe: not in enabled drivers build config 00:02:56.392 net/mana: not in enabled drivers build config 00:02:56.392 net/memif: not in enabled drivers build config 00:02:56.392 net/mlx4: not in enabled drivers build config 00:02:56.392 net/mlx5: not in enabled drivers build config 00:02:56.392 net/mvneta: not in enabled drivers build config 00:02:56.392 net/mvpp2: not in enabled drivers build config 00:02:56.392 net/netvsc: not in enabled drivers build config 00:02:56.392 net/nfb: not in enabled drivers build config 00:02:56.392 net/nfp: not in enabled drivers build config 00:02:56.392 net/ngbe: not in enabled drivers build config 00:02:56.392 net/null: not in enabled drivers build config 00:02:56.392 net/octeontx: not in enabled drivers build config 00:02:56.392 net/octeon_ep: not in enabled drivers build config 00:02:56.392 net/pcap: not in enabled drivers build config 00:02:56.392 net/pfe: not in enabled drivers build config 00:02:56.392 net/qede: not in enabled drivers build config 00:02:56.392 net/ring: not in enabled drivers build config 00:02:56.392 net/sfc: not in enabled drivers build config 00:02:56.392 net/softnic: not in enabled drivers build config 00:02:56.392 net/tap: not in enabled drivers build config 00:02:56.392 net/thunderx: not in enabled drivers build config 00:02:56.392 net/txgbe: not in enabled drivers build config 00:02:56.392 net/vdev_netvsc: not in enabled drivers build config 00:02:56.392 net/vhost: not in enabled drivers build config 00:02:56.392 net/virtio: not in enabled drivers build config 00:02:56.392 net/vmxnet3: not in enabled drivers build config 00:02:56.392 raw/*: missing internal dependency, "rawdev" 00:02:56.392 crypto/armv8: not in enabled drivers build config 00:02:56.392 crypto/bcmfs: not in enabled drivers build config 00:02:56.392 crypto/caam_jr: not in enabled drivers build config 00:02:56.392 crypto/ccp: not in enabled drivers build config 00:02:56.392 crypto/cnxk: not in enabled drivers build config 00:02:56.392 crypto/dpaa_sec: not in enabled drivers build config 00:02:56.392 crypto/dpaa2_sec: not in enabled drivers build config 00:02:56.392 crypto/ipsec_mb: not in enabled drivers build config 00:02:56.392 crypto/mlx5: not in enabled drivers build config 00:02:56.392 crypto/mvsam: not in enabled drivers build config 00:02:56.392 crypto/nitrox: not in enabled drivers build config 00:02:56.392 crypto/null: not in enabled drivers build config 00:02:56.392 crypto/octeontx: not in enabled drivers build config 00:02:56.392 crypto/openssl: not in enabled drivers build config 00:02:56.392 crypto/scheduler: not in enabled drivers build config 00:02:56.392 crypto/uadk: not in enabled drivers build config 
00:02:56.392 crypto/virtio: not in enabled drivers build config 00:02:56.392 compress/isal: not in enabled drivers build config 00:02:56.392 compress/mlx5: not in enabled drivers build config 00:02:56.392 compress/nitrox: not in enabled drivers build config 00:02:56.392 compress/octeontx: not in enabled drivers build config 00:02:56.392 compress/zlib: not in enabled drivers build config 00:02:56.392 regex/*: missing internal dependency, "regexdev" 00:02:56.392 ml/*: missing internal dependency, "mldev" 00:02:56.393 vdpa/ifc: not in enabled drivers build config 00:02:56.393 vdpa/mlx5: not in enabled drivers build config 00:02:56.393 vdpa/nfp: not in enabled drivers build config 00:02:56.393 vdpa/sfc: not in enabled drivers build config 00:02:56.393 event/*: missing internal dependency, "eventdev" 00:02:56.393 baseband/*: missing internal dependency, "bbdev" 00:02:56.393 gpu/*: missing internal dependency, "gpudev" 00:02:56.393 00:02:56.393 00:02:56.655 Build targets in project: 84 00:02:56.655 00:02:56.655 DPDK 24.03.0 00:02:56.655 00:02:56.655 User defined options 00:02:56.655 buildtype : debug 00:02:56.655 default_library : shared 00:02:56.655 libdir : lib 00:02:56.655 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:56.655 b_sanitize : address 00:02:56.655 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:56.655 c_link_args : 00:02:56.655 cpu_instruction_set: native 00:02:56.655 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:56.655 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:56.655 enable_docs : false 00:02:56.655 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:56.655 enable_kmods : false 00:02:56.655 max_lcores : 128 00:02:56.655 tests : false 00:02:56.655 00:02:56.655 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:57.229 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:57.229 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:57.229 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:57.229 [3/267] Linking static target lib/librte_kvargs.a 00:02:57.229 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:57.229 [5/267] Linking static target lib/librte_log.a 00:02:57.229 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:57.490 [7/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.490 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:57.490 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:57.490 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:57.490 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:57.490 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:57.490 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:57.490 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 
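(A minimal reproduction sketch, for anyone rebuilding this DPDK configuration by hand outside CI: every value below is taken from the "User defined options" summary above, the long disable_apps/disable_libs lists are elided, and the actual invocation generated by SPDK's build wrapper may differ.)

    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=dumpcap,graph,pdump,... \
        -Ddisable_libs=acl,argparse,bbdev,... \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    ninja -C build-tmp -j 10    # matches the backend command reported later in this log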
00:02:57.490 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:57.750 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:57.750 [17/267] Linking static target lib/librte_telemetry.a 00:02:57.750 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:57.750 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.010 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:58.010 [21/267] Linking target lib/librte_log.so.24.1 00:02:58.010 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:58.010 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:58.010 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:58.010 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:58.010 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:58.010 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:58.010 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:58.010 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:58.010 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:58.010 [31/267] Linking target lib/librte_kvargs.so.24.1 00:02:58.272 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:58.272 [33/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:58.272 [34/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.272 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:58.272 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:58.272 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:58.272 [38/267] Linking target lib/librte_telemetry.so.24.1 00:02:58.534 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:58.534 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:58.534 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:58.534 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:58.534 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:58.534 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:58.534 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:58.534 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:58.796 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:58.796 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:58.796 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:58.796 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:58.796 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:58.796 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:58.796 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 
00:02:59.057 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:59.057 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:59.057 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:59.057 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:59.057 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:59.057 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:59.316 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:59.316 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:59.316 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:59.316 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:59.317 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:59.317 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:59.576 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:59.576 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:59.576 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:59.576 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:59.576 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:59.576 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:59.834 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:59.834 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:59.834 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:59.834 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:59.834 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:59.834 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:59.834 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:59.834 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:00.093 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:00.093 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:00.093 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:00.093 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:00.352 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:00.352 [85/267] Linking static target lib/librte_eal.a 00:03:00.352 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:00.352 [87/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:00.352 [88/267] Linking static target lib/librte_ring.a 00:03:00.352 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:00.352 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:00.611 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:00.611 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:00.611 [93/267] Linking static target lib/librte_mempool.a 00:03:00.611 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:00.611 [95/267] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:00.611 [96/267] Linking static target lib/librte_rcu.a 00:03:00.611 [97/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:00.870 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.870 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:00.870 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:01.127 [101/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:01.127 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:01.127 [103/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.127 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:01.127 [105/267] Linking static target lib/librte_meter.a 00:03:01.127 [106/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:01.127 [107/267] Linking static target lib/librte_mbuf.a 00:03:01.127 [108/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:01.127 [109/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:01.127 [110/267] Linking static target lib/librte_net.a 00:03:01.384 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:01.384 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:01.384 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:01.384 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.384 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:01.641 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.641 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.641 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:01.899 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:01.899 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:01.899 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:01.899 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.157 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:02.157 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:02.157 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:02.157 [126/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:02.157 [127/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:02.157 [128/267] Linking static target lib/librte_pci.a 00:03:02.157 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:02.415 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:02.415 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:02.415 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:02.415 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:02.415 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:02.415 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:02.415 [136/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:02.415 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:02.415 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:02.415 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:02.415 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:02.415 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:02.674 [142/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.674 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:02.674 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:02.674 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:02.674 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:02.674 [147/267] Linking static target lib/librte_cmdline.a 00:03:02.674 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:02.933 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:02.933 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:02.933 [151/267] Linking static target lib/librte_timer.a 00:03:02.933 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:02.933 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:02.933 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:03.191 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:03.191 [156/267] Linking static target lib/librte_ethdev.a 00:03:03.191 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:03.191 [158/267] Linking static target lib/librte_compressdev.a 00:03:03.450 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:03.450 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:03.450 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.450 [162/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:03.450 [163/267] Linking static target lib/librte_hash.a 00:03:03.450 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:03.450 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:03.708 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:03.708 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:03.708 [168/267] Linking static target lib/librte_dmadev.a 00:03:03.708 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:03.708 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:03.708 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:03.967 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:03.967 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.967 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.225 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 
00:03:04.225 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:04.225 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:04.225 [178/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:04.225 [179/267] Linking static target lib/librte_cryptodev.a 00:03:04.225 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.225 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:04.225 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:04.225 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.483 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:04.483 [185/267] Linking static target lib/librte_power.a 00:03:04.483 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:04.741 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:04.741 [188/267] Linking static target lib/librte_reorder.a 00:03:04.741 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:04.741 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:04.741 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:04.741 [192/267] Linking static target lib/librte_security.a 00:03:04.999 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.257 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:05.257 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:05.257 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.515 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:05.515 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:05.515 [199/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.515 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:05.773 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:05.773 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:05.773 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:06.032 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:06.032 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:06.032 [206/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:06.032 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:06.032 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:06.032 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:06.032 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.290 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:06.290 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.290 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:06.290 [214/267] Linking static target drivers/librte_bus_vdev.a 00:03:06.290 [215/267] 
Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:06.290 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.290 [217/267] Linking static target drivers/librte_bus_pci.a 00:03:06.290 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.290 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:06.290 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:06.548 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:06.548 [222/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.548 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:06.548 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:06.548 [225/267] Linking static target drivers/librte_mempool_ring.a 00:03:06.548 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.805 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:08.180 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.180 [229/267] Linking target lib/librte_eal.so.24.1 00:03:08.180 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:08.180 [231/267] Linking target lib/librte_pci.so.24.1 00:03:08.180 [232/267] Linking target lib/librte_dmadev.so.24.1 00:03:08.180 [233/267] Linking target lib/librte_meter.so.24.1 00:03:08.180 [234/267] Linking target lib/librte_ring.so.24.1 00:03:08.180 [235/267] Linking target lib/librte_timer.so.24.1 00:03:08.180 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:08.180 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:08.180 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:08.438 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:08.438 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:08.438 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:08.438 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:08.438 [243/267] Linking target lib/librte_rcu.so.24.1 00:03:08.438 [244/267] Linking target lib/librte_mempool.so.24.1 00:03:08.438 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:08.438 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:08.438 [247/267] Linking target lib/librte_mbuf.so.24.1 00:03:08.438 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:08.696 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:08.696 [250/267] Linking target lib/librte_reorder.so.24.1 00:03:08.696 [251/267] Linking target lib/librte_compressdev.so.24.1 00:03:08.696 [252/267] Linking target lib/librte_net.so.24.1 00:03:08.696 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:08.696 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:08.696 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:08.696 
[256/267] Linking target lib/librte_hash.so.24.1 00:03:08.696 [257/267] Linking target lib/librte_cmdline.so.24.1 00:03:08.696 [258/267] Linking target lib/librte_security.so.24.1 00:03:08.959 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.959 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:08.959 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:08.959 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:09.222 [263/267] Linking target lib/librte_power.so.24.1 00:03:09.788 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:09.788 [265/267] Linking static target lib/librte_vhost.a 00:03:11.169 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.169 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:11.170 INFO: autodetecting backend as ninja 00:03:11.170 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:26.044 CC lib/log/log_deprecated.o 00:03:26.044 CC lib/log/log.o 00:03:26.044 CC lib/log/log_flags.o 00:03:26.304 CC lib/ut_mock/mock.o 00:03:26.304 CC lib/ut/ut.o 00:03:26.304 LIB libspdk_ut_mock.a 00:03:26.304 LIB libspdk_log.a 00:03:26.304 SO libspdk_ut_mock.so.6.0 00:03:26.304 LIB libspdk_ut.a 00:03:26.304 SO libspdk_log.so.7.0 00:03:26.304 SO libspdk_ut.so.2.0 00:03:26.304 SYMLINK libspdk_ut_mock.so 00:03:26.304 SYMLINK libspdk_log.so 00:03:26.305 SYMLINK libspdk_ut.so 00:03:26.565 CC lib/util/base64.o 00:03:26.565 CC lib/util/bit_array.o 00:03:26.565 CC lib/util/cpuset.o 00:03:26.565 CC lib/util/crc32c.o 00:03:26.565 CC lib/util/crc32.o 00:03:26.565 CC lib/util/crc16.o 00:03:26.565 CC lib/dma/dma.o 00:03:26.565 CXX lib/trace_parser/trace.o 00:03:26.565 CC lib/ioat/ioat.o 00:03:26.565 CC lib/util/crc32_ieee.o 00:03:26.565 CC lib/util/crc64.o 00:03:26.565 CC lib/vfio_user/host/vfio_user_pci.o 00:03:26.565 CC lib/util/dif.o 00:03:26.565 CC lib/util/fd.o 00:03:26.565 LIB libspdk_dma.a 00:03:26.827 SO libspdk_dma.so.5.0 00:03:26.827 CC lib/vfio_user/host/vfio_user.o 00:03:26.827 CC lib/util/fd_group.o 00:03:26.827 CC lib/util/file.o 00:03:26.827 CC lib/util/hexlify.o 00:03:26.827 SYMLINK libspdk_dma.so 00:03:26.827 CC lib/util/iov.o 00:03:26.827 CC lib/util/math.o 00:03:26.827 LIB libspdk_ioat.a 00:03:26.827 SO libspdk_ioat.so.7.0 00:03:26.827 CC lib/util/net.o 00:03:26.827 CC lib/util/pipe.o 00:03:26.827 SYMLINK libspdk_ioat.so 00:03:26.827 CC lib/util/strerror_tls.o 00:03:26.827 CC lib/util/string.o 00:03:26.827 CC lib/util/uuid.o 00:03:26.827 CC lib/util/xor.o 00:03:26.827 LIB libspdk_vfio_user.a 00:03:26.827 SO libspdk_vfio_user.so.5.0 00:03:26.827 CC lib/util/zipf.o 00:03:26.827 CC lib/util/md5.o 00:03:27.088 SYMLINK libspdk_vfio_user.so 00:03:27.088 LIB libspdk_util.a 00:03:27.350 LIB libspdk_trace_parser.a 00:03:27.350 SO libspdk_util.so.10.0 00:03:27.350 SO libspdk_trace_parser.so.6.0 00:03:27.350 SYMLINK libspdk_util.so 00:03:27.350 SYMLINK libspdk_trace_parser.so 00:03:27.610 CC lib/vmd/vmd.o 00:03:27.610 CC lib/vmd/led.o 00:03:27.610 CC lib/json/json_util.o 00:03:27.610 CC lib/json/json_parse.o 00:03:27.610 CC lib/json/json_write.o 00:03:27.610 CC lib/idxd/idxd.o 00:03:27.610 CC lib/conf/conf.o 00:03:27.610 CC lib/rdma_utils/rdma_utils.o 00:03:27.610 CC lib/rdma_provider/common.o 00:03:27.610 CC lib/env_dpdk/env.o 00:03:27.610 CC lib/idxd/idxd_user.o 00:03:27.610 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:03:27.610 CC lib/idxd/idxd_kernel.o 00:03:27.610 LIB libspdk_conf.a 00:03:27.610 CC lib/env_dpdk/memory.o 00:03:27.610 LIB libspdk_json.a 00:03:27.610 SO libspdk_conf.so.6.0 00:03:27.870 SO libspdk_json.so.6.0 00:03:27.870 LIB libspdk_rdma_utils.a 00:03:27.870 SYMLINK libspdk_conf.so 00:03:27.870 CC lib/env_dpdk/pci.o 00:03:27.870 SO libspdk_rdma_utils.so.1.0 00:03:27.870 CC lib/env_dpdk/init.o 00:03:27.870 SYMLINK libspdk_json.so 00:03:27.870 CC lib/env_dpdk/threads.o 00:03:27.871 LIB libspdk_rdma_provider.a 00:03:27.871 SYMLINK libspdk_rdma_utils.so 00:03:27.871 CC lib/env_dpdk/pci_ioat.o 00:03:27.871 SO libspdk_rdma_provider.so.6.0 00:03:27.871 SYMLINK libspdk_rdma_provider.so 00:03:27.871 CC lib/env_dpdk/pci_virtio.o 00:03:27.871 CC lib/env_dpdk/pci_vmd.o 00:03:27.871 CC lib/env_dpdk/pci_idxd.o 00:03:27.871 CC lib/jsonrpc/jsonrpc_server.o 00:03:28.131 LIB libspdk_vmd.a 00:03:28.131 CC lib/env_dpdk/pci_event.o 00:03:28.131 CC lib/env_dpdk/sigbus_handler.o 00:03:28.131 SO libspdk_vmd.so.6.0 00:03:28.131 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:28.131 CC lib/env_dpdk/pci_dpdk.o 00:03:28.131 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:28.131 SYMLINK libspdk_vmd.so 00:03:28.131 CC lib/jsonrpc/jsonrpc_client.o 00:03:28.131 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:28.131 LIB libspdk_idxd.a 00:03:28.131 SO libspdk_idxd.so.12.1 00:03:28.131 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:28.131 SYMLINK libspdk_idxd.so 00:03:28.393 LIB libspdk_jsonrpc.a 00:03:28.393 SO libspdk_jsonrpc.so.6.0 00:03:28.393 SYMLINK libspdk_jsonrpc.so 00:03:28.654 CC lib/rpc/rpc.o 00:03:28.915 LIB libspdk_rpc.a 00:03:28.915 SO libspdk_rpc.so.6.0 00:03:28.915 LIB libspdk_env_dpdk.a 00:03:28.915 SYMLINK libspdk_rpc.so 00:03:28.915 SO libspdk_env_dpdk.so.15.0 00:03:29.176 SYMLINK libspdk_env_dpdk.so 00:03:29.176 CC lib/trace/trace.o 00:03:29.176 CC lib/trace/trace_flags.o 00:03:29.176 CC lib/trace/trace_rpc.o 00:03:29.176 CC lib/notify/notify.o 00:03:29.176 CC lib/notify/notify_rpc.o 00:03:29.176 CC lib/keyring/keyring.o 00:03:29.176 CC lib/keyring/keyring_rpc.o 00:03:29.176 LIB libspdk_notify.a 00:03:29.176 SO libspdk_notify.so.6.0 00:03:29.176 LIB libspdk_keyring.a 00:03:29.437 SO libspdk_keyring.so.2.0 00:03:29.437 SYMLINK libspdk_notify.so 00:03:29.437 LIB libspdk_trace.a 00:03:29.437 SYMLINK libspdk_keyring.so 00:03:29.437 SO libspdk_trace.so.11.0 00:03:29.437 SYMLINK libspdk_trace.so 00:03:29.703 CC lib/sock/sock_rpc.o 00:03:29.703 CC lib/sock/sock.o 00:03:29.703 CC lib/thread/thread.o 00:03:29.703 CC lib/thread/iobuf.o 00:03:29.971 LIB libspdk_sock.a 00:03:29.971 SO libspdk_sock.so.10.0 00:03:30.228 SYMLINK libspdk_sock.so 00:03:30.228 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:30.228 CC lib/nvme/nvme_ctrlr.o 00:03:30.228 CC lib/nvme/nvme_ns.o 00:03:30.228 CC lib/nvme/nvme_fabric.o 00:03:30.228 CC lib/nvme/nvme_ns_cmd.o 00:03:30.228 CC lib/nvme/nvme_pcie.o 00:03:30.228 CC lib/nvme/nvme_pcie_common.o 00:03:30.228 CC lib/nvme/nvme_qpair.o 00:03:30.228 CC lib/nvme/nvme.o 00:03:30.795 CC lib/nvme/nvme_quirks.o 00:03:30.795 CC lib/nvme/nvme_transport.o 00:03:31.052 CC lib/nvme/nvme_discovery.o 00:03:31.052 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:31.052 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:31.052 CC lib/nvme/nvme_tcp.o 00:03:31.052 CC lib/nvme/nvme_opal.o 00:03:31.052 LIB libspdk_thread.a 00:03:31.052 SO libspdk_thread.so.10.1 00:03:31.052 CC lib/nvme/nvme_io_msg.o 00:03:31.310 SYMLINK libspdk_thread.so 00:03:31.310 CC lib/nvme/nvme_poll_group.o 00:03:31.310 CC lib/nvme/nvme_zns.o 00:03:31.310 
CC lib/nvme/nvme_stubs.o 00:03:31.310 CC lib/nvme/nvme_auth.o 00:03:31.569 CC lib/nvme/nvme_cuse.o 00:03:31.569 CC lib/nvme/nvme_rdma.o 00:03:31.828 CC lib/accel/accel.o 00:03:31.828 CC lib/blob/blobstore.o 00:03:31.828 CC lib/init/json_config.o 00:03:31.828 CC lib/init/subsystem.o 00:03:31.828 CC lib/init/subsystem_rpc.o 00:03:32.093 CC lib/accel/accel_rpc.o 00:03:32.093 CC lib/accel/accel_sw.o 00:03:32.093 CC lib/init/rpc.o 00:03:32.093 CC lib/blob/request.o 00:03:32.093 CC lib/blob/zeroes.o 00:03:32.093 LIB libspdk_init.a 00:03:32.355 SO libspdk_init.so.6.0 00:03:32.355 CC lib/blob/blob_bs_dev.o 00:03:32.355 SYMLINK libspdk_init.so 00:03:32.355 CC lib/virtio/virtio.o 00:03:32.355 CC lib/virtio/virtio_vhost_user.o 00:03:32.355 CC lib/event/app.o 00:03:32.355 CC lib/fsdev/fsdev.o 00:03:32.355 CC lib/event/reactor.o 00:03:32.355 CC lib/event/log_rpc.o 00:03:32.618 CC lib/event/app_rpc.o 00:03:32.618 LIB libspdk_accel.a 00:03:32.618 CC lib/event/scheduler_static.o 00:03:32.618 SO libspdk_accel.so.16.0 00:03:32.618 CC lib/fsdev/fsdev_io.o 00:03:32.618 SYMLINK libspdk_accel.so 00:03:32.618 CC lib/virtio/virtio_vfio_user.o 00:03:32.618 CC lib/fsdev/fsdev_rpc.o 00:03:32.618 CC lib/virtio/virtio_pci.o 00:03:32.879 LIB libspdk_nvme.a 00:03:32.879 CC lib/bdev/bdev.o 00:03:32.879 CC lib/bdev/bdev_zone.o 00:03:32.879 LIB libspdk_event.a 00:03:32.879 CC lib/bdev/part.o 00:03:32.879 CC lib/bdev/bdev_rpc.o 00:03:32.879 SO libspdk_event.so.14.0 00:03:32.879 CC lib/bdev/scsi_nvme.o 00:03:32.879 LIB libspdk_virtio.a 00:03:32.879 SO libspdk_virtio.so.7.0 00:03:32.879 SYMLINK libspdk_event.so 00:03:32.879 SO libspdk_nvme.so.14.0 00:03:33.140 LIB libspdk_fsdev.a 00:03:33.140 SYMLINK libspdk_virtio.so 00:03:33.140 SO libspdk_fsdev.so.1.0 00:03:33.140 SYMLINK libspdk_fsdev.so 00:03:33.140 SYMLINK libspdk_nvme.so 00:03:33.402 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:33.973 LIB libspdk_fuse_dispatcher.a 00:03:33.973 SO libspdk_fuse_dispatcher.so.1.0 00:03:33.973 SYMLINK libspdk_fuse_dispatcher.so 00:03:34.545 LIB libspdk_blob.a 00:03:34.545 SO libspdk_blob.so.11.0 00:03:34.807 SYMLINK libspdk_blob.so 00:03:34.807 CC lib/blobfs/blobfs.o 00:03:34.807 CC lib/blobfs/tree.o 00:03:34.807 CC lib/lvol/lvol.o 00:03:35.751 LIB libspdk_bdev.a 00:03:35.751 SO libspdk_bdev.so.16.0 00:03:35.751 LIB libspdk_lvol.a 00:03:35.751 SO libspdk_lvol.so.10.0 00:03:35.751 LIB libspdk_blobfs.a 00:03:35.751 SYMLINK libspdk_bdev.so 00:03:35.751 SO libspdk_blobfs.so.10.0 00:03:35.751 SYMLINK libspdk_lvol.so 00:03:35.751 SYMLINK libspdk_blobfs.so 00:03:35.751 CC lib/scsi/dev.o 00:03:35.751 CC lib/scsi/lun.o 00:03:35.751 CC lib/scsi/port.o 00:03:35.751 CC lib/scsi/scsi_bdev.o 00:03:35.751 CC lib/scsi/scsi_pr.o 00:03:35.751 CC lib/scsi/scsi.o 00:03:35.751 CC lib/ublk/ublk.o 00:03:35.751 CC lib/nvmf/ctrlr.o 00:03:35.751 CC lib/nbd/nbd.o 00:03:35.751 CC lib/ftl/ftl_core.o 00:03:36.012 CC lib/scsi/scsi_rpc.o 00:03:36.012 CC lib/ftl/ftl_init.o 00:03:36.012 CC lib/ftl/ftl_layout.o 00:03:36.012 CC lib/ublk/ublk_rpc.o 00:03:36.012 CC lib/ftl/ftl_debug.o 00:03:36.012 CC lib/scsi/task.o 00:03:36.012 CC lib/nbd/nbd_rpc.o 00:03:36.274 CC lib/ftl/ftl_io.o 00:03:36.274 CC lib/nvmf/ctrlr_discovery.o 00:03:36.274 CC lib/nvmf/ctrlr_bdev.o 00:03:36.274 CC lib/nvmf/subsystem.o 00:03:36.274 LIB libspdk_scsi.a 00:03:36.274 LIB libspdk_nbd.a 00:03:36.274 CC lib/ftl/ftl_sb.o 00:03:36.274 SO libspdk_nbd.so.7.0 00:03:36.274 CC lib/ftl/ftl_l2p.o 00:03:36.274 SO libspdk_scsi.so.9.0 00:03:36.274 SYMLINK libspdk_nbd.so 00:03:36.274 CC lib/nvmf/nvmf.o 00:03:36.274 
SYMLINK libspdk_scsi.so 00:03:36.274 CC lib/nvmf/nvmf_rpc.o 00:03:36.274 CC lib/nvmf/transport.o 00:03:36.534 CC lib/ftl/ftl_l2p_flat.o 00:03:36.534 LIB libspdk_ublk.a 00:03:36.534 CC lib/nvmf/tcp.o 00:03:36.534 SO libspdk_ublk.so.3.0 00:03:36.534 SYMLINK libspdk_ublk.so 00:03:36.795 CC lib/nvmf/stubs.o 00:03:36.795 CC lib/ftl/ftl_nv_cache.o 00:03:36.795 CC lib/iscsi/conn.o 00:03:36.795 CC lib/iscsi/init_grp.o 00:03:37.056 CC lib/iscsi/iscsi.o 00:03:37.056 CC lib/nvmf/mdns_server.o 00:03:37.056 CC lib/nvmf/rdma.o 00:03:37.318 CC lib/iscsi/param.o 00:03:37.318 CC lib/nvmf/auth.o 00:03:37.318 CC lib/iscsi/portal_grp.o 00:03:37.318 CC lib/vhost/vhost.o 00:03:37.318 CC lib/vhost/vhost_rpc.o 00:03:37.646 CC lib/iscsi/tgt_node.o 00:03:37.646 CC lib/vhost/vhost_scsi.o 00:03:37.646 CC lib/iscsi/iscsi_subsystem.o 00:03:37.646 CC lib/ftl/ftl_band.o 00:03:37.908 CC lib/iscsi/iscsi_rpc.o 00:03:37.908 CC lib/iscsi/task.o 00:03:37.908 CC lib/vhost/vhost_blk.o 00:03:37.908 CC lib/vhost/rte_vhost_user.o 00:03:38.170 CC lib/ftl/ftl_band_ops.o 00:03:38.170 CC lib/ftl/ftl_writer.o 00:03:38.170 CC lib/ftl/ftl_rq.o 00:03:38.170 CC lib/ftl/ftl_reloc.o 00:03:38.170 CC lib/ftl/ftl_l2p_cache.o 00:03:38.170 CC lib/ftl/ftl_p2l.o 00:03:38.170 LIB libspdk_iscsi.a 00:03:38.170 CC lib/ftl/ftl_p2l_log.o 00:03:38.170 CC lib/ftl/mngt/ftl_mngt.o 00:03:38.170 SO libspdk_iscsi.so.8.0 00:03:38.170 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:38.432 SYMLINK libspdk_iscsi.so 00:03:38.432 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:38.432 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:38.432 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:38.432 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:38.432 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:38.432 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:38.432 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:38.432 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:38.432 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:38.693 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:38.693 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:38.693 CC lib/ftl/utils/ftl_conf.o 00:03:38.693 CC lib/ftl/utils/ftl_md.o 00:03:38.693 CC lib/ftl/utils/ftl_mempool.o 00:03:38.693 CC lib/ftl/utils/ftl_bitmap.o 00:03:38.693 CC lib/ftl/utils/ftl_property.o 00:03:38.693 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:38.693 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:38.693 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:38.693 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:38.693 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:38.955 LIB libspdk_vhost.a 00:03:38.955 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:38.955 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:38.955 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:38.955 SO libspdk_vhost.so.8.0 00:03:38.955 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:38.955 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:38.955 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:38.955 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:38.955 SYMLINK libspdk_vhost.so 00:03:38.955 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:38.955 CC lib/ftl/base/ftl_base_dev.o 00:03:38.955 CC lib/ftl/base/ftl_base_bdev.o 00:03:38.955 CC lib/ftl/ftl_trace.o 00:03:39.218 LIB libspdk_nvmf.a 00:03:39.218 LIB libspdk_ftl.a 00:03:39.218 SO libspdk_nvmf.so.19.0 00:03:39.480 SO libspdk_ftl.so.9.0 00:03:39.480 SYMLINK libspdk_nvmf.so 00:03:39.742 SYMLINK libspdk_ftl.so 00:03:40.003 CC module/env_dpdk/env_dpdk_rpc.o 00:03:40.003 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:40.003 CC module/scheduler/gscheduler/gscheduler.o 00:03:40.003 CC module/keyring/file/keyring.o 00:03:40.003 CC module/accel/error/accel_error.o 00:03:40.003 CC 
module/blob/bdev/blob_bdev.o 00:03:40.003 CC module/sock/posix/posix.o 00:03:40.003 CC module/fsdev/aio/fsdev_aio.o 00:03:40.003 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:40.003 CC module/accel/ioat/accel_ioat.o 00:03:40.003 LIB libspdk_env_dpdk_rpc.a 00:03:40.003 SO libspdk_env_dpdk_rpc.so.6.0 00:03:40.003 CC module/keyring/file/keyring_rpc.o 00:03:40.003 CC module/accel/error/accel_error_rpc.o 00:03:40.003 LIB libspdk_scheduler_gscheduler.a 00:03:40.003 SYMLINK libspdk_env_dpdk_rpc.so 00:03:40.003 CC module/accel/ioat/accel_ioat_rpc.o 00:03:40.003 LIB libspdk_scheduler_dpdk_governor.a 00:03:40.003 LIB libspdk_scheduler_dynamic.a 00:03:40.003 SO libspdk_scheduler_gscheduler.so.4.0 00:03:40.003 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:40.263 SO libspdk_scheduler_dynamic.so.4.0 00:03:40.263 LIB libspdk_keyring_file.a 00:03:40.263 SYMLINK libspdk_scheduler_gscheduler.so 00:03:40.263 LIB libspdk_blob_bdev.a 00:03:40.263 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:40.263 SO libspdk_keyring_file.so.2.0 00:03:40.263 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:40.263 LIB libspdk_accel_error.a 00:03:40.263 SO libspdk_blob_bdev.so.11.0 00:03:40.263 SYMLINK libspdk_scheduler_dynamic.so 00:03:40.263 CC module/fsdev/aio/linux_aio_mgr.o 00:03:40.263 SO libspdk_accel_error.so.2.0 00:03:40.263 LIB libspdk_accel_ioat.a 00:03:40.263 SYMLINK libspdk_keyring_file.so 00:03:40.263 SO libspdk_accel_ioat.so.6.0 00:03:40.263 SYMLINK libspdk_blob_bdev.so 00:03:40.263 SYMLINK libspdk_accel_error.so 00:03:40.263 SYMLINK libspdk_accel_ioat.so 00:03:40.263 CC module/accel/iaa/accel_iaa.o 00:03:40.263 CC module/accel/iaa/accel_iaa_rpc.o 00:03:40.263 CC module/accel/dsa/accel_dsa.o 00:03:40.263 CC module/accel/dsa/accel_dsa_rpc.o 00:03:40.263 CC module/keyring/linux/keyring.o 00:03:40.523 CC module/keyring/linux/keyring_rpc.o 00:03:40.523 CC module/bdev/delay/vbdev_delay.o 00:03:40.523 CC module/blobfs/bdev/blobfs_bdev.o 00:03:40.523 LIB libspdk_accel_iaa.a 00:03:40.523 CC module/bdev/error/vbdev_error.o 00:03:40.523 SO libspdk_accel_iaa.so.3.0 00:03:40.523 CC module/bdev/gpt/gpt.o 00:03:40.523 LIB libspdk_keyring_linux.a 00:03:40.523 LIB libspdk_sock_posix.a 00:03:40.523 SYMLINK libspdk_accel_iaa.so 00:03:40.523 LIB libspdk_accel_dsa.a 00:03:40.523 SO libspdk_keyring_linux.so.1.0 00:03:40.523 SO libspdk_sock_posix.so.6.0 00:03:40.523 SO libspdk_accel_dsa.so.5.0 00:03:40.523 LIB libspdk_fsdev_aio.a 00:03:40.523 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:40.783 SYMLINK libspdk_keyring_linux.so 00:03:40.783 SO libspdk_fsdev_aio.so.1.0 00:03:40.783 CC module/bdev/lvol/vbdev_lvol.o 00:03:40.783 SYMLINK libspdk_sock_posix.so 00:03:40.783 SYMLINK libspdk_accel_dsa.so 00:03:40.784 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:40.784 CC module/bdev/error/vbdev_error_rpc.o 00:03:40.784 SYMLINK libspdk_fsdev_aio.so 00:03:40.784 CC module/bdev/malloc/bdev_malloc.o 00:03:40.784 CC module/bdev/gpt/vbdev_gpt.o 00:03:40.784 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:40.784 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:40.784 LIB libspdk_blobfs_bdev.a 00:03:40.784 SO libspdk_blobfs_bdev.so.6.0 00:03:40.784 CC module/bdev/null/bdev_null.o 00:03:40.784 LIB libspdk_bdev_error.a 00:03:40.784 CC module/bdev/nvme/bdev_nvme.o 00:03:40.784 LIB libspdk_bdev_delay.a 00:03:40.784 SO libspdk_bdev_error.so.6.0 00:03:40.784 SYMLINK libspdk_blobfs_bdev.so 00:03:40.784 SO libspdk_bdev_delay.so.6.0 00:03:41.045 SYMLINK libspdk_bdev_error.so 00:03:41.045 SYMLINK libspdk_bdev_delay.so 00:03:41.045 LIB libspdk_bdev_gpt.a 
00:03:41.045 CC module/bdev/passthru/vbdev_passthru.o 00:03:41.045 SO libspdk_bdev_gpt.so.6.0 00:03:41.045 CC module/bdev/raid/bdev_raid.o 00:03:41.045 CC module/bdev/null/bdev_null_rpc.o 00:03:41.045 CC module/bdev/split/vbdev_split.o 00:03:41.045 SYMLINK libspdk_bdev_gpt.so 00:03:41.045 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:41.045 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:41.045 LIB libspdk_bdev_malloc.a 00:03:41.045 SO libspdk_bdev_malloc.so.6.0 00:03:41.045 LIB libspdk_bdev_lvol.a 00:03:41.306 SO libspdk_bdev_lvol.so.6.0 00:03:41.306 LIB libspdk_bdev_null.a 00:03:41.306 CC module/bdev/xnvme/bdev_xnvme.o 00:03:41.306 SYMLINK libspdk_bdev_malloc.so 00:03:41.306 SO libspdk_bdev_null.so.6.0 00:03:41.306 SYMLINK libspdk_bdev_lvol.so 00:03:41.306 LIB libspdk_bdev_passthru.a 00:03:41.306 SYMLINK libspdk_bdev_null.so 00:03:41.306 CC module/bdev/split/vbdev_split_rpc.o 00:03:41.306 CC module/bdev/raid/bdev_raid_rpc.o 00:03:41.306 SO libspdk_bdev_passthru.so.6.0 00:03:41.306 CC module/bdev/aio/bdev_aio.o 00:03:41.306 SYMLINK libspdk_bdev_passthru.so 00:03:41.306 CC module/bdev/aio/bdev_aio_rpc.o 00:03:41.306 CC module/bdev/iscsi/bdev_iscsi.o 00:03:41.306 CC module/bdev/ftl/bdev_ftl.o 00:03:41.306 LIB libspdk_bdev_split.a 00:03:41.567 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:41.567 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:41.567 SO libspdk_bdev_split.so.6.0 00:03:41.567 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:41.567 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:41.567 SYMLINK libspdk_bdev_split.so 00:03:41.567 LIB libspdk_bdev_zone_block.a 00:03:41.567 LIB libspdk_bdev_xnvme.a 00:03:41.567 SO libspdk_bdev_zone_block.so.6.0 00:03:41.567 LIB libspdk_bdev_aio.a 00:03:41.567 SO libspdk_bdev_xnvme.so.3.0 00:03:41.567 SO libspdk_bdev_aio.so.6.0 00:03:41.567 SYMLINK libspdk_bdev_xnvme.so 00:03:41.568 SYMLINK libspdk_bdev_zone_block.so 00:03:41.568 CC module/bdev/raid/bdev_raid_sb.o 00:03:41.568 CC module/bdev/raid/raid0.o 00:03:41.568 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:41.568 LIB libspdk_bdev_iscsi.a 00:03:41.568 CC module/bdev/nvme/nvme_rpc.o 00:03:41.568 SYMLINK libspdk_bdev_aio.so 00:03:41.568 CC module/bdev/raid/raid1.o 00:03:41.568 LIB libspdk_bdev_ftl.a 00:03:41.568 SO libspdk_bdev_iscsi.so.6.0 00:03:41.829 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:41.829 SO libspdk_bdev_ftl.so.6.0 00:03:41.829 SYMLINK libspdk_bdev_iscsi.so 00:03:41.829 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:41.829 SYMLINK libspdk_bdev_ftl.so 00:03:41.829 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:41.829 CC module/bdev/raid/concat.o 00:03:41.829 CC module/bdev/nvme/bdev_mdns_client.o 00:03:41.829 CC module/bdev/nvme/vbdev_opal.o 00:03:41.829 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:41.829 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:42.090 LIB libspdk_bdev_raid.a 00:03:42.090 SO libspdk_bdev_raid.so.6.0 00:03:42.351 LIB libspdk_bdev_virtio.a 00:03:42.351 SYMLINK libspdk_bdev_raid.so 00:03:42.351 SO libspdk_bdev_virtio.so.6.0 00:03:42.351 SYMLINK libspdk_bdev_virtio.so 00:03:42.922 LIB libspdk_bdev_nvme.a 00:03:42.922 SO libspdk_bdev_nvme.so.7.0 00:03:42.922 SYMLINK libspdk_bdev_nvme.so 00:03:43.492 CC module/event/subsystems/sock/sock.o 00:03:43.492 CC module/event/subsystems/keyring/keyring.o 00:03:43.492 CC module/event/subsystems/vmd/vmd.o 00:03:43.492 CC module/event/subsystems/iobuf/iobuf.o 00:03:43.492 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:43.492 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:43.492 CC module/event/subsystems/fsdev/fsdev.o 
00:03:43.492 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:43.492 CC module/event/subsystems/scheduler/scheduler.o 00:03:43.492 LIB libspdk_event_keyring.a 00:03:43.492 LIB libspdk_event_vhost_blk.a 00:03:43.492 LIB libspdk_event_sock.a 00:03:43.492 LIB libspdk_event_fsdev.a 00:03:43.492 LIB libspdk_event_vmd.a 00:03:43.492 LIB libspdk_event_scheduler.a 00:03:43.492 SO libspdk_event_keyring.so.1.0 00:03:43.492 SO libspdk_event_vhost_blk.so.3.0 00:03:43.492 SO libspdk_event_sock.so.5.0 00:03:43.493 SO libspdk_event_fsdev.so.1.0 00:03:43.493 LIB libspdk_event_iobuf.a 00:03:43.493 SO libspdk_event_scheduler.so.4.0 00:03:43.493 SO libspdk_event_vmd.so.6.0 00:03:43.493 SO libspdk_event_iobuf.so.3.0 00:03:43.493 SYMLINK libspdk_event_keyring.so 00:03:43.493 SYMLINK libspdk_event_vhost_blk.so 00:03:43.493 SYMLINK libspdk_event_fsdev.so 00:03:43.493 SYMLINK libspdk_event_sock.so 00:03:43.493 SYMLINK libspdk_event_scheduler.so 00:03:43.493 SYMLINK libspdk_event_vmd.so 00:03:43.493 SYMLINK libspdk_event_iobuf.so 00:03:43.753 CC module/event/subsystems/accel/accel.o 00:03:44.013 LIB libspdk_event_accel.a 00:03:44.013 SO libspdk_event_accel.so.6.0 00:03:44.013 SYMLINK libspdk_event_accel.so 00:03:44.274 CC module/event/subsystems/bdev/bdev.o 00:03:44.274 LIB libspdk_event_bdev.a 00:03:44.535 SO libspdk_event_bdev.so.6.0 00:03:44.535 SYMLINK libspdk_event_bdev.so 00:03:44.535 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:44.535 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:44.535 CC module/event/subsystems/ublk/ublk.o 00:03:44.535 CC module/event/subsystems/scsi/scsi.o 00:03:44.535 CC module/event/subsystems/nbd/nbd.o 00:03:44.796 LIB libspdk_event_ublk.a 00:03:44.796 SO libspdk_event_ublk.so.3.0 00:03:44.796 LIB libspdk_event_nbd.a 00:03:44.796 LIB libspdk_event_scsi.a 00:03:44.796 SO libspdk_event_nbd.so.6.0 00:03:44.796 SO libspdk_event_scsi.so.6.0 00:03:44.796 SYMLINK libspdk_event_ublk.so 00:03:44.796 LIB libspdk_event_nvmf.a 00:03:44.796 SYMLINK libspdk_event_nbd.so 00:03:44.796 SYMLINK libspdk_event_scsi.so 00:03:44.796 SO libspdk_event_nvmf.so.6.0 00:03:44.796 SYMLINK libspdk_event_nvmf.so 00:03:45.057 CC module/event/subsystems/iscsi/iscsi.o 00:03:45.057 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:45.057 LIB libspdk_event_iscsi.a 00:03:45.057 LIB libspdk_event_vhost_scsi.a 00:03:45.057 SO libspdk_event_iscsi.so.6.0 00:03:45.057 SO libspdk_event_vhost_scsi.so.3.0 00:03:45.318 SYMLINK libspdk_event_vhost_scsi.so 00:03:45.318 SYMLINK libspdk_event_iscsi.so 00:03:45.318 SO libspdk.so.6.0 00:03:45.318 SYMLINK libspdk.so 00:03:45.580 CC app/spdk_nvme_identify/identify.o 00:03:45.580 CC app/trace_record/trace_record.o 00:03:45.580 CXX app/trace/trace.o 00:03:45.580 CC app/spdk_nvme_perf/perf.o 00:03:45.580 CC app/spdk_lspci/spdk_lspci.o 00:03:45.580 CC app/spdk_tgt/spdk_tgt.o 00:03:45.580 CC app/nvmf_tgt/nvmf_main.o 00:03:45.580 CC app/iscsi_tgt/iscsi_tgt.o 00:03:45.580 CC test/thread/poller_perf/poller_perf.o 00:03:45.580 CC examples/util/zipf/zipf.o 00:03:45.580 LINK spdk_lspci 00:03:45.842 LINK spdk_tgt 00:03:45.842 LINK poller_perf 00:03:45.842 LINK nvmf_tgt 00:03:45.842 LINK zipf 00:03:45.842 LINK spdk_trace_record 00:03:45.842 LINK iscsi_tgt 00:03:45.842 LINK spdk_trace 00:03:45.842 CC app/spdk_nvme_discover/discovery_aer.o 00:03:45.842 CC app/spdk_top/spdk_top.o 00:03:45.842 CC app/spdk_dd/spdk_dd.o 00:03:46.155 CC examples/ioat/perf/perf.o 00:03:46.155 CC test/dma/test_dma/test_dma.o 00:03:46.155 LINK spdk_nvme_discover 00:03:46.155 CC app/vhost/vhost.o 
00:03:46.155 CC app/fio/nvme/fio_plugin.o 00:03:46.156 LINK spdk_nvme_identify 00:03:46.156 LINK spdk_nvme_perf 00:03:46.156 CC test/app/bdev_svc/bdev_svc.o 00:03:46.156 LINK vhost 00:03:46.156 LINK ioat_perf 00:03:46.156 TEST_HEADER include/spdk/accel.h 00:03:46.156 TEST_HEADER include/spdk/accel_module.h 00:03:46.156 TEST_HEADER include/spdk/assert.h 00:03:46.156 TEST_HEADER include/spdk/barrier.h 00:03:46.156 TEST_HEADER include/spdk/base64.h 00:03:46.156 TEST_HEADER include/spdk/bdev.h 00:03:46.156 TEST_HEADER include/spdk/bdev_module.h 00:03:46.156 TEST_HEADER include/spdk/bdev_zone.h 00:03:46.156 TEST_HEADER include/spdk/bit_array.h 00:03:46.416 TEST_HEADER include/spdk/bit_pool.h 00:03:46.416 TEST_HEADER include/spdk/blob_bdev.h 00:03:46.416 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:46.416 TEST_HEADER include/spdk/blobfs.h 00:03:46.416 TEST_HEADER include/spdk/blob.h 00:03:46.416 TEST_HEADER include/spdk/conf.h 00:03:46.416 TEST_HEADER include/spdk/config.h 00:03:46.416 TEST_HEADER include/spdk/cpuset.h 00:03:46.416 TEST_HEADER include/spdk/crc16.h 00:03:46.416 TEST_HEADER include/spdk/crc32.h 00:03:46.416 TEST_HEADER include/spdk/crc64.h 00:03:46.416 TEST_HEADER include/spdk/dif.h 00:03:46.416 TEST_HEADER include/spdk/dma.h 00:03:46.416 TEST_HEADER include/spdk/endian.h 00:03:46.416 LINK bdev_svc 00:03:46.416 TEST_HEADER include/spdk/env_dpdk.h 00:03:46.416 TEST_HEADER include/spdk/env.h 00:03:46.416 TEST_HEADER include/spdk/event.h 00:03:46.416 TEST_HEADER include/spdk/fd_group.h 00:03:46.416 TEST_HEADER include/spdk/fd.h 00:03:46.416 TEST_HEADER include/spdk/file.h 00:03:46.416 TEST_HEADER include/spdk/fsdev.h 00:03:46.416 TEST_HEADER include/spdk/fsdev_module.h 00:03:46.416 TEST_HEADER include/spdk/ftl.h 00:03:46.416 LINK spdk_dd 00:03:46.416 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:46.416 TEST_HEADER include/spdk/gpt_spec.h 00:03:46.416 TEST_HEADER include/spdk/hexlify.h 00:03:46.416 TEST_HEADER include/spdk/histogram_data.h 00:03:46.416 TEST_HEADER include/spdk/idxd.h 00:03:46.416 TEST_HEADER include/spdk/idxd_spec.h 00:03:46.416 TEST_HEADER include/spdk/init.h 00:03:46.416 TEST_HEADER include/spdk/ioat.h 00:03:46.416 TEST_HEADER include/spdk/ioat_spec.h 00:03:46.416 TEST_HEADER include/spdk/iscsi_spec.h 00:03:46.416 CC app/fio/bdev/fio_plugin.o 00:03:46.416 TEST_HEADER include/spdk/json.h 00:03:46.416 TEST_HEADER include/spdk/jsonrpc.h 00:03:46.416 TEST_HEADER include/spdk/keyring.h 00:03:46.416 TEST_HEADER include/spdk/keyring_module.h 00:03:46.416 TEST_HEADER include/spdk/likely.h 00:03:46.416 TEST_HEADER include/spdk/log.h 00:03:46.416 TEST_HEADER include/spdk/lvol.h 00:03:46.416 TEST_HEADER include/spdk/md5.h 00:03:46.416 TEST_HEADER include/spdk/memory.h 00:03:46.416 TEST_HEADER include/spdk/mmio.h 00:03:46.416 TEST_HEADER include/spdk/nbd.h 00:03:46.416 TEST_HEADER include/spdk/net.h 00:03:46.416 TEST_HEADER include/spdk/notify.h 00:03:46.416 TEST_HEADER include/spdk/nvme.h 00:03:46.416 TEST_HEADER include/spdk/nvme_intel.h 00:03:46.416 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:46.416 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:46.416 TEST_HEADER include/spdk/nvme_spec.h 00:03:46.416 TEST_HEADER include/spdk/nvme_zns.h 00:03:46.416 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:46.416 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:46.416 TEST_HEADER include/spdk/nvmf.h 00:03:46.416 TEST_HEADER include/spdk/nvmf_spec.h 00:03:46.416 TEST_HEADER include/spdk/nvmf_transport.h 00:03:46.416 TEST_HEADER include/spdk/opal.h 00:03:46.416 TEST_HEADER 
include/spdk/opal_spec.h 00:03:46.416 TEST_HEADER include/spdk/pci_ids.h 00:03:46.416 TEST_HEADER include/spdk/pipe.h 00:03:46.416 TEST_HEADER include/spdk/queue.h 00:03:46.416 TEST_HEADER include/spdk/reduce.h 00:03:46.416 TEST_HEADER include/spdk/rpc.h 00:03:46.416 TEST_HEADER include/spdk/scheduler.h 00:03:46.416 TEST_HEADER include/spdk/scsi.h 00:03:46.416 CC examples/ioat/verify/verify.o 00:03:46.416 TEST_HEADER include/spdk/scsi_spec.h 00:03:46.416 TEST_HEADER include/spdk/sock.h 00:03:46.416 TEST_HEADER include/spdk/stdinc.h 00:03:46.416 TEST_HEADER include/spdk/string.h 00:03:46.416 TEST_HEADER include/spdk/thread.h 00:03:46.416 TEST_HEADER include/spdk/trace.h 00:03:46.416 TEST_HEADER include/spdk/trace_parser.h 00:03:46.416 TEST_HEADER include/spdk/tree.h 00:03:46.416 TEST_HEADER include/spdk/ublk.h 00:03:46.416 TEST_HEADER include/spdk/util.h 00:03:46.416 TEST_HEADER include/spdk/uuid.h 00:03:46.416 TEST_HEADER include/spdk/version.h 00:03:46.416 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:46.416 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:46.416 TEST_HEADER include/spdk/vhost.h 00:03:46.416 TEST_HEADER include/spdk/vmd.h 00:03:46.416 TEST_HEADER include/spdk/xor.h 00:03:46.416 CC test/env/mem_callbacks/mem_callbacks.o 00:03:46.416 TEST_HEADER include/spdk/zipf.h 00:03:46.416 CXX test/cpp_headers/accel.o 00:03:46.416 CXX test/cpp_headers/accel_module.o 00:03:46.416 CC examples/vmd/lsvmd/lsvmd.o 00:03:46.416 LINK test_dma 00:03:46.677 LINK verify 00:03:46.677 CXX test/cpp_headers/assert.o 00:03:46.677 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:46.677 LINK lsvmd 00:03:46.677 LINK spdk_nvme 00:03:46.677 CXX test/cpp_headers/barrier.o 00:03:46.677 CC examples/idxd/perf/perf.o 00:03:46.677 CXX test/cpp_headers/base64.o 00:03:46.677 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:46.677 LINK spdk_bdev 00:03:46.677 CC examples/vmd/led/led.o 00:03:46.677 CXX test/cpp_headers/bdev.o 00:03:46.938 CC examples/thread/thread/thread_ex.o 00:03:46.938 LINK interrupt_tgt 00:03:46.938 LINK spdk_top 00:03:46.938 LINK nvme_fuzz 00:03:46.938 CC examples/sock/hello_world/hello_sock.o 00:03:46.938 LINK led 00:03:46.938 LINK mem_callbacks 00:03:46.938 CXX test/cpp_headers/bdev_module.o 00:03:46.938 LINK idxd_perf 00:03:46.938 CC test/event/event_perf/event_perf.o 00:03:46.938 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:46.938 LINK thread 00:03:46.938 CC test/env/vtophys/vtophys.o 00:03:46.938 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:46.938 CXX test/cpp_headers/bdev_zone.o 00:03:47.199 LINK event_perf 00:03:47.199 CC test/event/reactor/reactor.o 00:03:47.199 LINK hello_sock 00:03:47.199 CC test/event/reactor_perf/reactor_perf.o 00:03:47.199 CC test/nvme/aer/aer.o 00:03:47.199 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:47.199 LINK reactor 00:03:47.199 LINK vtophys 00:03:47.199 CXX test/cpp_headers/bit_array.o 00:03:47.199 LINK reactor_perf 00:03:47.199 CC test/nvme/reset/reset.o 00:03:47.199 CC test/nvme/sgl/sgl.o 00:03:47.199 CXX test/cpp_headers/bit_pool.o 00:03:47.461 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:47.461 CC test/env/memory/memory_ut.o 00:03:47.461 CC examples/accel/perf/accel_perf.o 00:03:47.461 CC test/event/app_repeat/app_repeat.o 00:03:47.461 LINK aer 00:03:47.461 CXX test/cpp_headers/blob_bdev.o 00:03:47.461 LINK env_dpdk_post_init 00:03:47.461 LINK vhost_fuzz 00:03:47.461 LINK reset 00:03:47.461 LINK sgl 00:03:47.461 LINK app_repeat 00:03:47.461 CXX test/cpp_headers/blobfs_bdev.o 00:03:47.721 CC test/app/histogram_perf/histogram_perf.o 
00:03:47.721 CC test/app/jsoncat/jsoncat.o 00:03:47.721 CC test/nvme/e2edp/nvme_dp.o 00:03:47.721 CC test/event/scheduler/scheduler.o 00:03:47.721 CXX test/cpp_headers/blobfs.o 00:03:47.721 CC test/env/pci/pci_ut.o 00:03:47.721 CC test/app/stub/stub.o 00:03:47.721 LINK histogram_perf 00:03:47.721 LINK jsoncat 00:03:47.721 CXX test/cpp_headers/blob.o 00:03:47.721 LINK accel_perf 00:03:47.721 LINK scheduler 00:03:47.981 LINK stub 00:03:47.981 CXX test/cpp_headers/conf.o 00:03:47.981 CC test/nvme/overhead/overhead.o 00:03:47.981 LINK nvme_dp 00:03:47.981 LINK pci_ut 00:03:47.981 CXX test/cpp_headers/config.o 00:03:47.981 CC examples/nvme/hello_world/hello_world.o 00:03:47.981 CC examples/blob/hello_world/hello_blob.o 00:03:47.981 CXX test/cpp_headers/cpuset.o 00:03:48.242 CXX test/cpp_headers/crc16.o 00:03:48.242 CC examples/bdev/hello_world/hello_bdev.o 00:03:48.242 CC examples/bdev/bdevperf/bdevperf.o 00:03:48.242 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:48.242 LINK overhead 00:03:48.242 LINK hello_blob 00:03:48.242 LINK hello_world 00:03:48.242 CXX test/cpp_headers/crc32.o 00:03:48.242 CC examples/blob/cli/blobcli.o 00:03:48.242 LINK hello_bdev 00:03:48.242 CXX test/cpp_headers/crc64.o 00:03:48.501 CXX test/cpp_headers/dif.o 00:03:48.501 CC test/nvme/err_injection/err_injection.o 00:03:48.501 LINK memory_ut 00:03:48.501 LINK hello_fsdev 00:03:48.501 CC examples/nvme/reconnect/reconnect.o 00:03:48.501 CXX test/cpp_headers/dma.o 00:03:48.501 LINK iscsi_fuzz 00:03:48.501 CXX test/cpp_headers/endian.o 00:03:48.501 CXX test/cpp_headers/env_dpdk.o 00:03:48.501 CXX test/cpp_headers/env.o 00:03:48.501 CXX test/cpp_headers/event.o 00:03:48.501 LINK err_injection 00:03:48.501 CXX test/cpp_headers/fd_group.o 00:03:48.501 CXX test/cpp_headers/fd.o 00:03:48.760 CXX test/cpp_headers/file.o 00:03:48.760 CXX test/cpp_headers/fsdev.o 00:03:48.760 CC test/rpc_client/rpc_client_test.o 00:03:48.760 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:48.760 CC test/nvme/startup/startup.o 00:03:48.760 LINK reconnect 00:03:48.760 CC examples/nvme/arbitration/arbitration.o 00:03:48.760 LINK blobcli 00:03:48.760 LINK bdevperf 00:03:48.760 CXX test/cpp_headers/fsdev_module.o 00:03:48.760 CC test/accel/dif/dif.o 00:03:48.760 CC examples/nvme/hotplug/hotplug.o 00:03:49.020 LINK rpc_client_test 00:03:49.020 LINK startup 00:03:49.020 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:49.020 CXX test/cpp_headers/ftl.o 00:03:49.020 CC examples/nvme/abort/abort.o 00:03:49.020 CC test/nvme/reserve/reserve.o 00:03:49.020 CXX test/cpp_headers/fuse_dispatcher.o 00:03:49.020 LINK arbitration 00:03:49.020 CXX test/cpp_headers/gpt_spec.o 00:03:49.020 LINK hotplug 00:03:49.020 LINK cmb_copy 00:03:49.280 CXX test/cpp_headers/hexlify.o 00:03:49.280 CXX test/cpp_headers/histogram_data.o 00:03:49.280 LINK reserve 00:03:49.280 CC test/nvme/simple_copy/simple_copy.o 00:03:49.280 LINK nvme_manage 00:03:49.280 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:49.280 CXX test/cpp_headers/idxd.o 00:03:49.280 CXX test/cpp_headers/idxd_spec.o 00:03:49.280 LINK abort 00:03:49.280 CC test/nvme/connect_stress/connect_stress.o 00:03:49.280 CXX test/cpp_headers/init.o 00:03:49.541 LINK pmr_persistence 00:03:49.541 LINK simple_copy 00:03:49.541 CC test/blobfs/mkfs/mkfs.o 00:03:49.541 CXX test/cpp_headers/ioat.o 00:03:49.541 CC test/lvol/esnap/esnap.o 00:03:49.541 CXX test/cpp_headers/ioat_spec.o 00:03:49.541 LINK connect_stress 00:03:49.541 LINK dif 00:03:49.541 CC test/nvme/boot_partition/boot_partition.o 00:03:49.541 CXX 
test/cpp_headers/iscsi_spec.o 00:03:49.541 CXX test/cpp_headers/json.o 00:03:49.541 LINK mkfs 00:03:49.541 CC test/nvme/compliance/nvme_compliance.o 00:03:49.541 CXX test/cpp_headers/jsonrpc.o 00:03:49.541 CC test/nvme/fused_ordering/fused_ordering.o 00:03:49.801 LINK boot_partition 00:03:49.801 CXX test/cpp_headers/keyring.o 00:03:49.801 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:49.801 CXX test/cpp_headers/keyring_module.o 00:03:49.801 CC examples/nvmf/nvmf/nvmf.o 00:03:49.801 LINK fused_ordering 00:03:49.801 CXX test/cpp_headers/likely.o 00:03:49.801 CC test/nvme/cuse/cuse.o 00:03:49.801 CC test/nvme/fdp/fdp.o 00:03:49.801 CC test/bdev/bdevio/bdevio.o 00:03:49.801 CXX test/cpp_headers/log.o 00:03:49.801 LINK doorbell_aers 00:03:49.801 CXX test/cpp_headers/lvol.o 00:03:50.061 LINK nvme_compliance 00:03:50.061 CXX test/cpp_headers/md5.o 00:03:50.061 CXX test/cpp_headers/memory.o 00:03:50.061 CXX test/cpp_headers/mmio.o 00:03:50.061 LINK nvmf 00:03:50.061 CXX test/cpp_headers/nbd.o 00:03:50.061 CXX test/cpp_headers/net.o 00:03:50.061 CXX test/cpp_headers/notify.o 00:03:50.061 CXX test/cpp_headers/nvme.o 00:03:50.061 CXX test/cpp_headers/nvme_intel.o 00:03:50.319 CXX test/cpp_headers/nvme_ocssd.o 00:03:50.319 LINK fdp 00:03:50.319 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:50.319 CXX test/cpp_headers/nvme_spec.o 00:03:50.319 CXX test/cpp_headers/nvme_zns.o 00:03:50.319 LINK bdevio 00:03:50.319 CXX test/cpp_headers/nvmf_cmd.o 00:03:50.319 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:50.319 CXX test/cpp_headers/nvmf.o 00:03:50.319 CXX test/cpp_headers/nvmf_spec.o 00:03:50.319 CXX test/cpp_headers/nvmf_transport.o 00:03:50.319 CXX test/cpp_headers/opal.o 00:03:50.319 CXX test/cpp_headers/opal_spec.o 00:03:50.319 CXX test/cpp_headers/pci_ids.o 00:03:50.579 CXX test/cpp_headers/pipe.o 00:03:50.579 CXX test/cpp_headers/queue.o 00:03:50.579 CXX test/cpp_headers/reduce.o 00:03:50.579 CXX test/cpp_headers/rpc.o 00:03:50.579 CXX test/cpp_headers/scheduler.o 00:03:50.579 CXX test/cpp_headers/scsi.o 00:03:50.579 CXX test/cpp_headers/scsi_spec.o 00:03:50.579 CXX test/cpp_headers/sock.o 00:03:50.579 CXX test/cpp_headers/stdinc.o 00:03:50.579 CXX test/cpp_headers/string.o 00:03:50.579 CXX test/cpp_headers/thread.o 00:03:50.579 CXX test/cpp_headers/trace.o 00:03:50.839 CXX test/cpp_headers/trace_parser.o 00:03:50.839 CXX test/cpp_headers/tree.o 00:03:50.839 CXX test/cpp_headers/ublk.o 00:03:50.839 CXX test/cpp_headers/util.o 00:03:50.839 CXX test/cpp_headers/uuid.o 00:03:50.839 CXX test/cpp_headers/version.o 00:03:50.839 CXX test/cpp_headers/vfio_user_pci.o 00:03:50.839 CXX test/cpp_headers/vfio_user_spec.o 00:03:50.839 CXX test/cpp_headers/vhost.o 00:03:50.839 CXX test/cpp_headers/vmd.o 00:03:50.839 CXX test/cpp_headers/xor.o 00:03:50.839 CXX test/cpp_headers/zipf.o 00:03:51.099 LINK cuse 00:03:54.394 LINK esnap 00:03:54.394 ************************************ 00:03:54.394 END TEST make 00:03:54.394 ************************************ 00:03:54.394 00:03:54.394 real 1m7.929s 00:03:54.394 user 6m11.186s 00:03:54.394 sys 1m5.770s 00:03:54.394 21:35:13 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:54.394 21:35:13 make -- common/autotest_common.sh@10 -- $ set +x 00:03:54.394 21:35:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:54.394 21:35:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:54.394 21:35:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:54.394 21:35:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.394 
21:35:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:54.394 21:35:13 -- pm/common@44 -- $ pid=5054 00:03:54.394 21:35:13 -- pm/common@50 -- $ kill -TERM 5054 00:03:54.394 21:35:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.394 21:35:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:54.394 21:35:13 -- pm/common@44 -- $ pid=5055 00:03:54.394 21:35:13 -- pm/common@50 -- $ kill -TERM 5055 00:03:54.394 21:35:13 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:54.394 21:35:13 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:54.394 21:35:13 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:54.394 21:35:13 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:54.394 21:35:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.394 21:35:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.394 21:35:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.394 21:35:13 -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.394 21:35:13 -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.394 21:35:13 -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.394 21:35:13 -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.394 21:35:13 -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.394 21:35:13 -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.394 21:35:13 -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.394 21:35:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.394 21:35:13 -- scripts/common.sh@344 -- # case "$op" in 00:03:54.394 21:35:13 -- scripts/common.sh@345 -- # : 1 00:03:54.394 21:35:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.394 21:35:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.394 21:35:13 -- scripts/common.sh@365 -- # decimal 1 00:03:54.394 21:35:13 -- scripts/common.sh@353 -- # local d=1 00:03:54.394 21:35:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.394 21:35:13 -- scripts/common.sh@355 -- # echo 1 00:03:54.394 21:35:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.394 21:35:13 -- scripts/common.sh@366 -- # decimal 2 00:03:54.394 21:35:13 -- scripts/common.sh@353 -- # local d=2 00:03:54.394 21:35:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.394 21:35:13 -- scripts/common.sh@355 -- # echo 2 00:03:54.394 21:35:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.394 21:35:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.394 21:35:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.394 21:35:13 -- scripts/common.sh@368 -- # return 0 00:03:54.394 21:35:13 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.394 21:35:13 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:54.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.394 --rc genhtml_branch_coverage=1 00:03:54.394 --rc genhtml_function_coverage=1 00:03:54.394 --rc genhtml_legend=1 00:03:54.394 --rc geninfo_all_blocks=1 00:03:54.394 --rc geninfo_unexecuted_blocks=1 00:03:54.394 00:03:54.394 ' 00:03:54.394 21:35:13 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.395 --rc genhtml_branch_coverage=1 00:03:54.395 --rc genhtml_function_coverage=1 00:03:54.395 --rc genhtml_legend=1 00:03:54.395 --rc geninfo_all_blocks=1 00:03:54.395 --rc geninfo_unexecuted_blocks=1 00:03:54.395 00:03:54.395 ' 00:03:54.395 21:35:13 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.395 --rc genhtml_branch_coverage=1 00:03:54.395 --rc genhtml_function_coverage=1 00:03:54.395 --rc genhtml_legend=1 00:03:54.395 --rc geninfo_all_blocks=1 00:03:54.395 --rc geninfo_unexecuted_blocks=1 00:03:54.395 00:03:54.395 ' 00:03:54.395 21:35:13 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:54.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.395 --rc genhtml_branch_coverage=1 00:03:54.395 --rc genhtml_function_coverage=1 00:03:54.395 --rc genhtml_legend=1 00:03:54.395 --rc geninfo_all_blocks=1 00:03:54.395 --rc geninfo_unexecuted_blocks=1 00:03:54.395 00:03:54.395 ' 00:03:54.395 21:35:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:54.395 21:35:13 -- nvmf/common.sh@7 -- # uname -s 00:03:54.395 21:35:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:54.395 21:35:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:54.395 21:35:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:54.395 21:35:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:54.395 21:35:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:54.395 21:35:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:54.395 21:35:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:54.395 21:35:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:54.395 21:35:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:54.395 21:35:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:54.395 21:35:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a23feacc-ffd4-4573-b7c3-e3cf82b0b04d 00:03:54.395 
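The xtrace above steps through the version gate in scripts/common.sh: 'lt 1.15 2' expands to cmp_versions 1.15 '<' 2, which splits both version strings on '.', '-' and ':' into arrays and compares them field by field until one side wins, so lcov 1.15 takes the branch that exports the lcov_branch_coverage/lcov_function_coverage compatibility flags. A minimal standalone sketch of that dotted-version comparison, assuming plain bash and numeric fields (version_lt and the zero-padding of missing fields are illustrative, not the repo's exact helper):

    # Sketch only: exit 0 when $1 sorts strictly before $2.
    version_lt() {
        local IFS=.-:                     # split fields the same way the trace does
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                          # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov older than 2.x: enable compat flags"

For 1.15 versus 2 the very first field decides the comparison (1 < 2), which is the ver1[v]=1 / ver2[v]=2 step visible in the trace.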
21:35:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=a23feacc-ffd4-4573-b7c3-e3cf82b0b04d 00:03:54.395 21:35:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:54.395 21:35:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:54.395 21:35:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:54.395 21:35:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:54.395 21:35:13 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:54.395 21:35:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:54.395 21:35:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:54.395 21:35:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.395 21:35:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.395 21:35:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.395 21:35:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.395 21:35:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.395 21:35:13 -- paths/export.sh@5 -- # export PATH 00:03:54.395 21:35:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.395 21:35:13 -- nvmf/common.sh@51 -- # : 0 00:03:54.395 21:35:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:54.395 21:35:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:54.395 21:35:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:54.395 21:35:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:54.395 21:35:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:54.395 21:35:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:54.395 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:54.395 21:35:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:54.395 21:35:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:54.395 21:35:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:54.395 21:35:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:54.395 21:35:13 -- spdk/autotest.sh@32 -- # uname -s 00:03:54.395 21:35:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:54.395 21:35:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:54.395 21:35:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:54.700 21:35:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:54.700 21:35:13 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:54.700 21:35:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:54.700 21:35:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:54.700 21:35:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:54.700 21:35:13 -- spdk/autotest.sh@48 -- # udevadm_pid=54626 00:03:54.700 21:35:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:54.700 21:35:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:54.700 21:35:13 -- pm/common@17 -- # local monitor 00:03:54.700 21:35:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.700 21:35:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.700 21:35:13 -- pm/common@25 -- # sleep 1 00:03:54.700 21:35:13 -- pm/common@21 -- # date +%s 00:03:54.700 21:35:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727645713 00:03:54.700 21:35:13 -- pm/common@21 -- # date +%s 00:03:54.700 21:35:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727645713 00:03:54.700 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727645713_collect-cpu-load.pm.log 00:03:54.700 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727645713_collect-vmstat.pm.log 00:03:55.677 21:35:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:55.677 21:35:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:55.677 21:35:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:55.677 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:03:55.677 21:35:14 -- spdk/autotest.sh@59 -- # create_test_list 00:03:55.677 21:35:14 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:55.677 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:03:55.678 21:35:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:55.678 21:35:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:55.678 21:35:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:55.678 21:35:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:55.678 21:35:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:55.678 21:35:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:55.678 21:35:14 -- common/autotest_common.sh@1455 -- # uname 00:03:55.678 21:35:14 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:55.678 21:35:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:55.678 21:35:14 -- common/autotest_common.sh@1475 -- # uname 00:03:55.678 21:35:14 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:55.678 21:35:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:55.678 21:35:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:55.678 lcov: LCOV version 1.15 00:03:55.678 21:35:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:10.548 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:10.548 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:22.738 21:35:41 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:22.738 21:35:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.738 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:04:22.738 21:35:41 -- spdk/autotest.sh@78 -- # rm -f 00:04:22.738 21:35:41 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.561 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:23.561 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:23.561 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:23.561 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:23.561 21:35:42 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:23.561 21:35:42 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:23.561 21:35:42 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:23.561 21:35:42 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:23.561 21:35:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.561 21:35:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:23.561 21:35:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:23.561 21:35:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.561 21:35:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1c1n1 00:04:23.561 21:35:42 -- common/autotest_common.sh@1648 -- # local device=nvme1c1n1 00:04:23.561 21:35:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.561 21:35:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:23.561 21:35:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:23.561 21:35:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.561 21:35:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:23.561 21:35:42 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:23.561 21:35:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.561 21:35:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:23.561 21:35:42 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:23.561 21:35:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:23.561 
21:35:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.561 21:35:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:23.561 21:35:42 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:23.561 21:35:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.561 21:35:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:23.561 21:35:42 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:23.561 21:35:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:23.561 21:35:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.561 21:35:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:23.561 21:35:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.561 21:35:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.561 21:35:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:23.561 21:35:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:23.561 21:35:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:23.819 No valid GPT data, bailing 00:04:23.820 21:35:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.820 21:35:42 -- scripts/common.sh@394 -- # pt= 00:04:23.820 21:35:42 -- scripts/common.sh@395 -- # return 1 00:04:23.820 21:35:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:23.820 1+0 records in 00:04:23.820 1+0 records out 00:04:23.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116868 s, 89.7 MB/s 00:04:23.820 21:35:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.820 21:35:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.820 21:35:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:23.820 21:35:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:23.820 21:35:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:23.820 No valid GPT data, bailing 00:04:23.820 21:35:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:23.820 21:35:42 -- scripts/common.sh@394 -- # pt= 00:04:23.820 21:35:42 -- scripts/common.sh@395 -- # return 1 00:04:23.820 21:35:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:23.820 1+0 records in 00:04:23.820 1+0 records out 00:04:23.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00289274 s, 362 MB/s 00:04:23.820 21:35:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.820 21:35:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.820 21:35:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:23.820 21:35:42 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:23.820 21:35:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:23.820 No valid GPT data, bailing 00:04:23.820 21:35:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:23.820 21:35:42 -- scripts/common.sh@394 -- # pt= 00:04:23.820 21:35:42 -- scripts/common.sh@395 -- # return 1 00:04:23.820 21:35:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:23.820 1+0 
records in 00:04:23.820 1+0 records out 00:04:23.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043516 s, 241 MB/s 00:04:23.820 21:35:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.820 21:35:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:23.820 21:35:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:23.820 21:35:42 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:23.820 21:35:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:23.820 No valid GPT data, bailing 00:04:23.820 21:35:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:24.077 21:35:42 -- scripts/common.sh@394 -- # pt= 00:04:24.077 21:35:42 -- scripts/common.sh@395 -- # return 1 00:04:24.077 21:35:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:24.077 1+0 records in 00:04:24.077 1+0 records out 00:04:24.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397677 s, 264 MB/s 00:04:24.077 21:35:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.077 21:35:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:24.077 21:35:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:24.077 21:35:42 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:24.077 21:35:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:24.077 No valid GPT data, bailing 00:04:24.077 21:35:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:24.077 21:35:42 -- scripts/common.sh@394 -- # pt= 00:04:24.077 21:35:42 -- scripts/common.sh@395 -- # return 1 00:04:24.077 21:35:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:24.077 1+0 records in 00:04:24.077 1+0 records out 00:04:24.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00383244 s, 274 MB/s 00:04:24.077 21:35:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.077 21:35:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:24.077 21:35:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:24.077 21:35:42 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:24.077 21:35:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:24.077 No valid GPT data, bailing 00:04:24.077 21:35:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:24.077 21:35:42 -- scripts/common.sh@394 -- # pt= 00:04:24.077 21:35:42 -- scripts/common.sh@395 -- # return 1 00:04:24.077 21:35:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:24.077 1+0 records in 00:04:24.077 1+0 records out 00:04:24.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00374758 s, 280 MB/s 00:04:24.077 21:35:42 -- spdk/autotest.sh@105 -- # sync 00:04:24.335 21:35:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:24.335 21:35:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:24.335 21:35:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:25.711 21:35:44 -- spdk/autotest.sh@111 -- # uname -s 00:04:25.711 21:35:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:25.711 21:35:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:25.711 21:35:44 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:26.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.536 
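The pre-cleanup pass recorded above walks every non-partition NVMe namespace (the extglob pattern /dev/nvme*n!(*p*) from the trace; zoned namespaces were already excluded by the get_zoned_devs scan of /sys/block/*/queue/zoned), asks spdk-gpt.py and blkid whether a partition table exists, and on 'No valid GPT data, bailing' zeroes the first MiB with dd before the closing sync. A condensed sketch of that loop, assuming root and blkid on PATH (the spdk-gpt.py probe is folded into the blkid check here; deliberately destructive, so only ever aimed at scratch devices):

    # Sketch of the wipe loop seen in the trace; destructive by design.
    shopt -s extglob                      # required for the !(*p*) glob
    for dev in /dev/nvme*n!(*p*); do      # whole namespaces only, no partitions
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            # No partition table found: scrub stale metadata in the first MiB.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync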
Hugepages 00:04:26.536 node hugesize free / total 00:04:26.536 node0 1048576kB 0 / 0 00:04:26.536 node0 2048kB 0 / 0 00:04:26.536 00:04:26.536 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:26.536 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:26.536 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:26.794 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:26.794 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:26.794 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:26.794 21:35:45 -- spdk/autotest.sh@117 -- # uname -s 00:04:26.794 21:35:45 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:26.794 21:35:45 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:26.794 21:35:45 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.624 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.882 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.882 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.882 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.882 21:35:46 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:28.817 21:35:47 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:28.817 21:35:47 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:28.817 21:35:47 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:28.817 21:35:47 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:28.817 21:35:47 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:28.817 21:35:47 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:28.817 21:35:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.817 21:35:47 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:28.817 21:35:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:28.817 21:35:47 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:28.817 21:35:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:29.075 21:35:47 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.333 Waiting for block devices as requested 00:04:29.333 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.591 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.591 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.591 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.860 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:34.861 21:35:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:34.861 21:35:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:34.861 21:35:53 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:34.861 21:35:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:34.861 21:35:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:34.861 21:35:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:34.861 21:35:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:34.861 21:35:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:34.861 21:35:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1541 -- # continue 00:04:34.861 21:35:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:34.861 21:35:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:34.861 21:35:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:34.861 21:35:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:34.861 21:35:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:34.861 21:35:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:34.861 21:35:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:34.861 21:35:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:34.861 21:35:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1541 -- # continue 00:04:34.861 21:35:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:34.861 21:35:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:34.861 21:35:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:34.861 21:35:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:34.861 21:35:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1541 -- # continue 00:04:34.861 21:35:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:34.861 21:35:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:34.861 21:35:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:34.861 21:35:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:34.861 21:35:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:34.861 21:35:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:34.861 21:35:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:34.861 21:35:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:34.861 21:35:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:34.861 21:35:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:34.861 21:35:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:04:34.861 21:35:53 -- common/autotest_common.sh@1541 -- # continue 00:04:34.861 21:35:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:34.861 21:35:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.861 21:35:53 -- common/autotest_common.sh@10 -- # set +x 00:04:34.861 21:35:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:34.861 21:35:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.861 21:35:53 -- common/autotest_common.sh@10 -- # set +x 00:04:34.861 21:35:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:35.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.687 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.687 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.687 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.687 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:35.945 21:35:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:35.945 21:35:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:35.945 21:35:54 -- common/autotest_common.sh@10 -- # set +x 00:04:35.945 21:35:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:35.945 21:35:54 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:35.945 21:35:54 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:35.945 21:35:54 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:35.945 21:35:54 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:35.945 21:35:54 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:35.945 21:35:54 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:35.945 21:35:54 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:35.945 21:35:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:35.945 21:35:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:35.945 21:35:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.945 21:35:54 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:35.945 21:35:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:35.945 21:35:54 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:35.945 21:35:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:35.945 21:35:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:35.946 21:35:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:35.946 21:35:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:35.946 21:35:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.946 21:35:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:35.946 21:35:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:35.946 21:35:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:35.946 21:35:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.946 21:35:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:35.946 21:35:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:35.946 21:35:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:35.946 21:35:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
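The device checks above all rely on the same trick for turning a PCI address back into a controller node: readlink -f on /sys/class/nvme/nvme* reveals which controller hangs off a given BDF, after which nvme id-ctrl supplies the OACS word (0x12a in this run; its 0x8 bit is namespace management) and the unvmcap figure consulted before each controller is skipped with continue. A hedged sketch of that BDF-to-controller mapping, assuming nvme-cli and root (ctrlr_from_bdf is an illustrative name, not the repo's helper):

    # Sketch: resolve a BDF such as 0000:00:10.0 to its /dev/nvmeN node.
    ctrlr_from_bdf() {
        local bdf=$1 path
        for path in /sys/class/nvme/nvme*; do
            # sysfs links resolve to .../pci0000:00/<bdf>/nvme/nvmeN
            if [[ $(readlink -f "$path") == *"/$bdf/nvme/"* ]]; then
                printf '/dev/%s\n' "$(basename "$path")"
                return 0
            fi
        done
        return 1
    }

    ctrlr=$(ctrlr_from_bdf 0000:00:10.0) || exit 1
    oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/^oacs/ {print $2}')
    (( oacs & 0x8 )) && echo "$ctrlr supports namespace management"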
00:04:35.946 21:35:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:35.946 21:35:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:35.946 21:35:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:35.946 21:35:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:35.946 21:35:54 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:35.946 21:35:54 -- common/autotest_common.sh@1570 -- # return 0 00:04:35.946 21:35:54 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:35.946 21:35:54 -- common/autotest_common.sh@1578 -- # return 0 00:04:35.946 21:35:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:35.946 21:35:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:35.946 21:35:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:35.946 21:35:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:35.946 21:35:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:35.946 21:35:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.946 21:35:54 -- common/autotest_common.sh@10 -- # set +x 00:04:35.946 21:35:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:35.946 21:35:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:35.946 21:35:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.946 21:35:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.946 21:35:54 -- common/autotest_common.sh@10 -- # set +x 00:04:35.946 ************************************ 00:04:35.946 START TEST env 00:04:35.946 ************************************ 00:04:35.946 21:35:54 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:35.946 * Looking for test storage... 00:04:35.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:35.946 21:35:54 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:35.946 21:35:54 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:35.946 21:35:54 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:36.204 21:35:54 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:36.204 21:35:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.204 21:35:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.204 21:35:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.204 21:35:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.204 21:35:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.204 21:35:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.204 21:35:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.204 21:35:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.204 21:35:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.204 21:35:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.204 21:35:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.204 21:35:54 env -- scripts/common.sh@344 -- # case "$op" in 00:04:36.204 21:35:54 env -- scripts/common.sh@345 -- # : 1 00:04:36.204 21:35:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.204 21:35:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.204 21:35:54 env -- scripts/common.sh@365 -- # decimal 1 00:04:36.204 21:35:54 env -- scripts/common.sh@353 -- # local d=1 00:04:36.204 21:35:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.204 21:35:54 env -- scripts/common.sh@355 -- # echo 1 00:04:36.204 21:35:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.204 21:35:54 env -- scripts/common.sh@366 -- # decimal 2 00:04:36.204 21:35:54 env -- scripts/common.sh@353 -- # local d=2 00:04:36.204 21:35:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.204 21:35:54 env -- scripts/common.sh@355 -- # echo 2 00:04:36.205 21:35:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.205 21:35:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.205 21:35:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.205 21:35:54 env -- scripts/common.sh@368 -- # return 0 00:04:36.205 21:35:54 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.205 21:35:54 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.205 --rc genhtml_branch_coverage=1 00:04:36.205 --rc genhtml_function_coverage=1 00:04:36.205 --rc genhtml_legend=1 00:04:36.205 --rc geninfo_all_blocks=1 00:04:36.205 --rc geninfo_unexecuted_blocks=1 00:04:36.205 00:04:36.205 ' 00:04:36.205 21:35:54 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.205 --rc genhtml_branch_coverage=1 00:04:36.205 --rc genhtml_function_coverage=1 00:04:36.205 --rc genhtml_legend=1 00:04:36.205 --rc geninfo_all_blocks=1 00:04:36.205 --rc geninfo_unexecuted_blocks=1 00:04:36.205 00:04:36.205 ' 00:04:36.205 21:35:54 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.205 --rc genhtml_branch_coverage=1 00:04:36.205 --rc genhtml_function_coverage=1 00:04:36.205 --rc genhtml_legend=1 00:04:36.205 --rc geninfo_all_blocks=1 00:04:36.205 --rc geninfo_unexecuted_blocks=1 00:04:36.205 00:04:36.205 ' 00:04:36.205 21:35:54 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:36.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.205 --rc genhtml_branch_coverage=1 00:04:36.205 --rc genhtml_function_coverage=1 00:04:36.205 --rc genhtml_legend=1 00:04:36.205 --rc geninfo_all_blocks=1 00:04:36.205 --rc geninfo_unexecuted_blocks=1 00:04:36.205 00:04:36.205 ' 00:04:36.205 21:35:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:36.205 21:35:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.205 21:35:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.205 21:35:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.205 ************************************ 00:04:36.205 START TEST env_memory 00:04:36.205 ************************************ 00:04:36.205 21:35:54 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:36.205 00:04:36.205 00:04:36.205 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.205 http://cunit.sourceforge.net/ 00:04:36.205 00:04:36.205 00:04:36.205 Suite: memory 00:04:36.205 Test: alloc and free memory map ...[2024-09-29 21:35:55.026165] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:36.205 passed 00:04:36.205 Test: mem map translation ...[2024-09-29 21:35:55.064817] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:36.205 [2024-09-29 21:35:55.064862] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:36.205 [2024-09-29 21:35:55.064920] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:36.205 [2024-09-29 21:35:55.064934] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:36.205 passed 00:04:36.205 Test: mem map registration ...[2024-09-29 21:35:55.132921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:36.205 [2024-09-29 21:35:55.132966] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:36.205 passed 00:04:36.464 Test: mem map adjacent registrations ...passed 00:04:36.464 00:04:36.464 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.464 suites 1 1 n/a 0 0 00:04:36.464 tests 4 4 4 0 0 00:04:36.464 asserts 152 152 152 0 n/a 00:04:36.464 00:04:36.464 Elapsed time = 0.233 seconds 00:04:36.464 00:04:36.464 real 0m0.265s 00:04:36.464 user 0m0.244s 00:04:36.464 sys 0m0.015s 00:04:36.464 21:35:55 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.464 21:35:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:36.464 ************************************ 00:04:36.464 END TEST env_memory 00:04:36.464 ************************************ 00:04:36.464 21:35:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:36.464 21:35:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.464 21:35:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.464 21:35:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.464 ************************************ 00:04:36.464 START TEST env_vtophys 00:04:36.464 ************************************ 00:04:36.464 21:35:55 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:36.464 EAL: lib.eal log level changed from notice to debug 00:04:36.464 EAL: Detected lcore 0 as core 0 on socket 0 00:04:36.464 EAL: Detected lcore 1 as core 0 on socket 0 00:04:36.464 EAL: Detected lcore 2 as core 0 on socket 0 00:04:36.464 EAL: Detected lcore 3 as core 0 on socket 0 00:04:36.464 EAL: Detected lcore 4 as core 0 on socket 0 00:04:36.464 EAL: Detected lcore 5 as core 0 on socket 0 00:04:36.464 EAL: Detected lcore 6 as core 0 on socket 0 00:04:36.464 EAL: Detected lcore 7 as core 0 on socket 0 00:04:36.464 EAL: Detected lcore 8 as core 0 on socket 0 00:04:36.464 EAL: Detected lcore 9 as core 0 on socket 0 00:04:36.464 EAL: Maximum logical cores by configuration: 128 00:04:36.464 EAL: Detected CPU lcores: 10 00:04:36.464 EAL: Detected NUMA nodes: 1 00:04:36.464 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:36.464 EAL: Detected shared linkage of DPDK 00:04:36.464 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:36.464 EAL: Selected IOVA mode 'PA' 00:04:36.464 EAL: Probing VFIO support... 00:04:36.464 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:36.464 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:36.464 EAL: Ask a virtual area of 0x2e000 bytes 00:04:36.464 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:36.464 EAL: Setting up physically contiguous memory... 00:04:36.464 EAL: Setting maximum number of open files to 524288 00:04:36.464 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:36.464 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:36.464 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.464 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:36.464 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.464 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.464 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:36.464 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:36.464 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.464 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:36.464 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.464 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.464 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:36.464 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:36.464 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.464 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:36.464 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.464 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.464 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:36.464 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:36.464 EAL: Ask a virtual area of 0x61000 bytes 00:04:36.464 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:36.464 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:36.464 EAL: Ask a virtual area of 0x400000000 bytes 00:04:36.464 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:36.464 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:36.464 EAL: Hugepages will be freed exactly as allocated. 00:04:36.464 EAL: No shared files mode enabled, IPC is disabled 00:04:36.464 EAL: No shared files mode enabled, IPC is disabled 00:04:36.722 EAL: TSC frequency is ~2600000 KHz 00:04:36.722 EAL: Main lcore 0 is ready (tid=7f66838dca40;cpuset=[0]) 00:04:36.722 EAL: Trying to obtain current memory policy. 00:04:36.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.722 EAL: Restoring previous memory policy: 0 00:04:36.722 EAL: request: mp_malloc_sync 00:04:36.722 EAL: No shared files mode enabled, IPC is disabled 00:04:36.722 EAL: Heap on socket 0 was expanded by 2MB 00:04:36.722 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:36.722 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:36.722 EAL: Mem event callback 'spdk:(nil)' registered 00:04:36.722 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:36.722 00:04:36.722 00:04:36.722 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.722 http://cunit.sourceforge.net/ 00:04:36.722 00:04:36.722 00:04:36.722 Suite: components_suite 00:04:36.980 Test: vtophys_malloc_test ...passed 00:04:36.980 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:36.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.980 EAL: Restoring previous memory policy: 4 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was expanded by 4MB 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was shrunk by 4MB 00:04:36.980 EAL: Trying to obtain current memory policy. 00:04:36.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.980 EAL: Restoring previous memory policy: 4 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was expanded by 6MB 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was shrunk by 6MB 00:04:36.980 EAL: Trying to obtain current memory policy. 00:04:36.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.980 EAL: Restoring previous memory policy: 4 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was expanded by 10MB 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was shrunk by 10MB 00:04:36.980 EAL: Trying to obtain current memory policy. 00:04:36.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.980 EAL: Restoring previous memory policy: 4 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was expanded by 18MB 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was shrunk by 18MB 00:04:36.980 EAL: Trying to obtain current memory policy. 00:04:36.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.980 EAL: Restoring previous memory policy: 4 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was expanded by 34MB 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was shrunk by 34MB 00:04:36.980 EAL: Trying to obtain current memory policy. 
00:04:36.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.980 EAL: Restoring previous memory policy: 4 00:04:36.980 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.980 EAL: request: mp_malloc_sync 00:04:36.980 EAL: No shared files mode enabled, IPC is disabled 00:04:36.980 EAL: Heap on socket 0 was expanded by 66MB 00:04:37.241 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.241 EAL: request: mp_malloc_sync 00:04:37.241 EAL: No shared files mode enabled, IPC is disabled 00:04:37.241 EAL: Heap on socket 0 was shrunk by 66MB 00:04:37.241 EAL: Trying to obtain current memory policy. 00:04:37.241 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.241 EAL: Restoring previous memory policy: 4 00:04:37.241 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.241 EAL: request: mp_malloc_sync 00:04:37.241 EAL: No shared files mode enabled, IPC is disabled 00:04:37.241 EAL: Heap on socket 0 was expanded by 130MB 00:04:37.503 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.503 EAL: request: mp_malloc_sync 00:04:37.503 EAL: No shared files mode enabled, IPC is disabled 00:04:37.503 EAL: Heap on socket 0 was shrunk by 130MB 00:04:37.503 EAL: Trying to obtain current memory policy. 00:04:37.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.503 EAL: Restoring previous memory policy: 4 00:04:37.503 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.503 EAL: request: mp_malloc_sync 00:04:37.503 EAL: No shared files mode enabled, IPC is disabled 00:04:37.503 EAL: Heap on socket 0 was expanded by 258MB 00:04:37.762 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.762 EAL: request: mp_malloc_sync 00:04:37.762 EAL: No shared files mode enabled, IPC is disabled 00:04:37.762 EAL: Heap on socket 0 was shrunk by 258MB 00:04:38.021 EAL: Trying to obtain current memory policy. 00:04:38.021 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.278 EAL: Restoring previous memory policy: 4 00:04:38.278 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.278 EAL: request: mp_malloc_sync 00:04:38.278 EAL: No shared files mode enabled, IPC is disabled 00:04:38.278 EAL: Heap on socket 0 was expanded by 514MB 00:04:38.844 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.844 EAL: request: mp_malloc_sync 00:04:38.844 EAL: No shared files mode enabled, IPC is disabled 00:04:38.844 EAL: Heap on socket 0 was shrunk by 514MB 00:04:39.409 EAL: Trying to obtain current memory policy. 
00:04:39.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.409 EAL: Restoring previous memory policy: 4 00:04:39.409 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.409 EAL: request: mp_malloc_sync 00:04:39.409 EAL: No shared files mode enabled, IPC is disabled 00:04:39.409 EAL: Heap on socket 0 was expanded by 1026MB 00:04:40.867 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.867 EAL: request: mp_malloc_sync 00:04:40.867 EAL: No shared files mode enabled, IPC is disabled 00:04:40.867 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:41.439 passed 00:04:41.439 00:04:41.439 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.439 suites 1 1 n/a 0 0 00:04:41.439 tests 2 2 2 0 0 00:04:41.439 asserts 5810 5810 5810 0 n/a 00:04:41.439 00:04:41.439 Elapsed time = 4.825 seconds 00:04:41.439 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.439 EAL: request: mp_malloc_sync 00:04:41.439 EAL: No shared files mode enabled, IPC is disabled 00:04:41.439 EAL: Heap on socket 0 was shrunk by 2MB 00:04:41.439 EAL: No shared files mode enabled, IPC is disabled 00:04:41.439 EAL: No shared files mode enabled, IPC is disabled 00:04:41.439 EAL: No shared files mode enabled, IPC is disabled 00:04:41.439 00:04:41.439 real 0m5.084s 00:04:41.439 user 0m4.299s 00:04:41.439 sys 0m0.637s 00:04:41.439 ************************************ 00:04:41.439 21:36:00 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.439 21:36:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:41.439 END TEST env_vtophys 00:04:41.439 ************************************ 00:04:41.700 21:36:00 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.700 21:36:00 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.700 21:36:00 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.700 21:36:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.700 ************************************ 00:04:41.700 START TEST env_pci 00:04:41.700 ************************************ 00:04:41.700 21:36:00 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.700 00:04:41.700 00:04:41.700 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.700 http://cunit.sourceforge.net/ 00:04:41.700 00:04:41.700 00:04:41.700 Suite: pci 00:04:41.700 Test: pci_hook ...[2024-09-29 21:36:00.469898] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57371 has claimed it 00:04:41.700 passed 00:04:41.700 00:04:41.700 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.700 suites 1 1 n/a 0 0 00:04:41.700 tests 1 1 1 0 0 00:04:41.700 asserts 25 25 25 0 n/a 00:04:41.700 00:04:41.700 Elapsed time = 0.004 seconds 00:04:41.700 EAL: Cannot find device (10000:00:01.0) 00:04:41.700 EAL: Failed to attach device on primary process 00:04:41.700 00:04:41.700 real 0m0.058s 00:04:41.700 user 0m0.027s 00:04:41.700 sys 0m0.031s 00:04:41.700 21:36:00 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.700 ************************************ 00:04:41.700 END TEST env_pci 00:04:41.700 ************************************ 00:04:41.700 21:36:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:41.700 21:36:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:41.700 21:36:00 env -- env/env.sh@15 -- # uname 00:04:41.700 21:36:00 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:41.700 21:36:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:41.700 21:36:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.700 21:36:00 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:41.700 21:36:00 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.700 21:36:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.700 ************************************ 00:04:41.700 START TEST env_dpdk_post_init 00:04:41.700 ************************************ 00:04:41.700 21:36:00 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.700 EAL: Detected CPU lcores: 10 00:04:41.700 EAL: Detected NUMA nodes: 1 00:04:41.700 EAL: Detected shared linkage of DPDK 00:04:41.701 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.701 EAL: Selected IOVA mode 'PA' 00:04:41.962 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.962 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:41.962 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:41.962 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:41.962 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:41.962 Starting DPDK initialization... 00:04:41.962 Starting SPDK post initialization... 00:04:41.962 SPDK NVMe probe 00:04:41.962 Attaching to 0000:00:10.0 00:04:41.962 Attaching to 0000:00:11.0 00:04:41.962 Attaching to 0000:00:12.0 00:04:41.962 Attaching to 0000:00:13.0 00:04:41.962 Attached to 0000:00:13.0 00:04:41.962 Attached to 0000:00:10.0 00:04:41.962 Attached to 0000:00:11.0 00:04:41.962 Attached to 0000:00:12.0 00:04:41.962 Cleaning up... 
00:04:41.962 00:04:41.962 real 0m0.246s 00:04:41.962 user 0m0.077s 00:04:41.962 sys 0m0.071s 00:04:41.962 21:36:00 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.962 21:36:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.962 ************************************ 00:04:41.962 END TEST env_dpdk_post_init 00:04:41.962 ************************************ 00:04:41.962 21:36:00 env -- env/env.sh@26 -- # uname 00:04:41.962 21:36:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.962 21:36:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.962 21:36:00 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.962 21:36:00 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.962 21:36:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.963 ************************************ 00:04:41.963 START TEST env_mem_callbacks 00:04:41.963 ************************************ 00:04:41.963 21:36:00 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.963 EAL: Detected CPU lcores: 10 00:04:41.963 EAL: Detected NUMA nodes: 1 00:04:41.963 EAL: Detected shared linkage of DPDK 00:04:41.963 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.963 EAL: Selected IOVA mode 'PA' 00:04:42.223 00:04:42.223 00:04:42.223 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.223 http://cunit.sourceforge.net/ 00:04:42.223 00:04:42.223 00:04:42.223 Suite: memory 00:04:42.223 Test: test ... 00:04:42.223 register 0x200000200000 2097152 00:04:42.223 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.223 malloc 3145728 00:04:42.223 register 0x200000400000 4194304 00:04:42.223 buf 0x2000004fffc0 len 3145728 PASSED 00:04:42.223 malloc 64 00:04:42.223 buf 0x2000004ffec0 len 64 PASSED 00:04:42.223 malloc 4194304 00:04:42.223 register 0x200000800000 6291456 00:04:42.223 buf 0x2000009fffc0 len 4194304 PASSED 00:04:42.223 free 0x2000004fffc0 3145728 00:04:42.223 free 0x2000004ffec0 64 00:04:42.223 unregister 0x200000400000 4194304 PASSED 00:04:42.223 free 0x2000009fffc0 4194304 00:04:42.223 unregister 0x200000800000 6291456 PASSED 00:04:42.223 malloc 8388608 00:04:42.223 register 0x200000400000 10485760 00:04:42.223 buf 0x2000005fffc0 len 8388608 PASSED 00:04:42.223 free 0x2000005fffc0 8388608 00:04:42.223 unregister 0x200000400000 10485760 PASSED 00:04:42.223 passed 00:04:42.223 00:04:42.223 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.223 suites 1 1 n/a 0 0 00:04:42.223 tests 1 1 1 0 0 00:04:42.223 asserts 15 15 15 0 n/a 00:04:42.223 00:04:42.223 Elapsed time = 0.048 seconds 00:04:42.223 00:04:42.223 real 0m0.226s 00:04:42.223 user 0m0.070s 00:04:42.223 sys 0m0.053s 00:04:42.223 21:36:01 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.223 21:36:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:42.223 ************************************ 00:04:42.223 END TEST env_mem_callbacks 00:04:42.223 ************************************ 00:04:42.223 00:04:42.223 real 0m6.329s 00:04:42.223 user 0m4.865s 00:04:42.223 sys 0m1.027s 00:04:42.223 21:36:01 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.223 ************************************ 00:04:42.223 END TEST env 00:04:42.223 ************************************ 00:04:42.223 21:36:01 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:42.223 21:36:01 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:42.223 21:36:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.223 21:36:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.223 21:36:01 -- common/autotest_common.sh@10 -- # set +x 00:04:42.484 ************************************ 00:04:42.484 START TEST rpc 00:04:42.484 ************************************ 00:04:42.484 21:36:01 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:42.484 * Looking for test storage... 00:04:42.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.484 21:36:01 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:42.484 21:36:01 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:42.484 21:36:01 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:42.484 21:36:01 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:42.484 21:36:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.484 21:36:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.484 21:36:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.484 21:36:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.484 21:36:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.484 21:36:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.484 21:36:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.484 21:36:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.484 21:36:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.484 21:36:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.484 21:36:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.484 21:36:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:42.484 21:36:01 rpc -- scripts/common.sh@345 -- # : 1 00:04:42.484 21:36:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.484 21:36:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.484 21:36:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:42.484 21:36:01 rpc -- scripts/common.sh@353 -- # local d=1 00:04:42.484 21:36:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.484 21:36:01 rpc -- scripts/common.sh@355 -- # echo 1 00:04:42.484 21:36:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.484 21:36:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:42.485 21:36:01 rpc -- scripts/common.sh@353 -- # local d=2 00:04:42.485 21:36:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.485 21:36:01 rpc -- scripts/common.sh@355 -- # echo 2 00:04:42.485 21:36:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.485 21:36:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.485 21:36:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.485 21:36:01 rpc -- scripts/common.sh@368 -- # return 0 00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 21:36:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57492 00:04:42.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.485 21:36:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.485 21:36:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57492 00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@831 -- # '[' -z 57492 ']' 00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.485 21:36:01 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.485 21:36:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.485 [2024-09-29 21:36:01.448593] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:42.485 [2024-09-29 21:36:01.448785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57492 ] 00:04:42.745 [2024-09-29 21:36:01.603563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.003 [2024-09-29 21:36:01.845400] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:43.003 [2024-09-29 21:36:01.845460] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57492' to capture a snapshot of events at runtime. 00:04:43.003 [2024-09-29 21:36:01.845470] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:43.003 [2024-09-29 21:36:01.845481] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:43.003 [2024-09-29 21:36:01.845489] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57492 for offline analysis/debug. 00:04:43.003 [2024-09-29 21:36:01.845525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.572 21:36:02 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.572 21:36:02 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:43.572 21:36:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.572 21:36:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.572 21:36:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.572 21:36:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.572 21:36:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.572 21:36:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.572 21:36:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.572 ************************************ 00:04:43.572 START TEST rpc_integrity 00:04:43.572 ************************************ 00:04:43.572 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:43.572 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.572 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.572 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.572 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.572 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.572 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.572 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.572 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.572 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.572 21:36:02 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.572 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.572 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.572 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.572 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.572 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.572 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.572 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.572 { 00:04:43.572 "name": "Malloc0", 00:04:43.572 "aliases": [ 00:04:43.572 "e97ca031-411a-4c36-89bd-62f08f4f9529" 00:04:43.572 ], 00:04:43.572 "product_name": "Malloc disk", 00:04:43.572 "block_size": 512, 00:04:43.572 "num_blocks": 16384, 00:04:43.572 "uuid": "e97ca031-411a-4c36-89bd-62f08f4f9529", 00:04:43.572 "assigned_rate_limits": { 00:04:43.572 "rw_ios_per_sec": 0, 00:04:43.572 "rw_mbytes_per_sec": 0, 00:04:43.572 "r_mbytes_per_sec": 0, 00:04:43.572 "w_mbytes_per_sec": 0 00:04:43.572 }, 00:04:43.572 "claimed": false, 00:04:43.572 "zoned": false, 00:04:43.572 "supported_io_types": { 00:04:43.572 "read": true, 00:04:43.572 "write": true, 00:04:43.572 "unmap": true, 00:04:43.572 "flush": true, 00:04:43.572 "reset": true, 00:04:43.572 "nvme_admin": false, 00:04:43.572 "nvme_io": false, 00:04:43.572 "nvme_io_md": false, 00:04:43.572 "write_zeroes": true, 00:04:43.572 "zcopy": true, 00:04:43.572 "get_zone_info": false, 00:04:43.572 "zone_management": false, 00:04:43.572 "zone_append": false, 00:04:43.572 "compare": false, 00:04:43.572 "compare_and_write": false, 00:04:43.572 "abort": true, 00:04:43.572 "seek_hole": false, 00:04:43.572 "seek_data": false, 00:04:43.572 "copy": true, 00:04:43.572 "nvme_iov_md": false 00:04:43.572 }, 00:04:43.572 "memory_domains": [ 00:04:43.572 { 00:04:43.572 "dma_device_id": "system", 00:04:43.572 "dma_device_type": 1 00:04:43.572 }, 00:04:43.572 { 00:04:43.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.572 "dma_device_type": 2 00:04:43.572 } 00:04:43.572 ], 00:04:43.572 "driver_specific": {} 00:04:43.572 } 00:04:43.572 ]' 00:04:43.572 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.831 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.831 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.831 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.831 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.831 [2024-09-29 21:36:02.577746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.831 [2024-09-29 21:36:02.577803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.831 [2024-09-29 21:36:02.577825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:43.831 [2024-09-29 21:36:02.577836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.831 [2024-09-29 21:36:02.580053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.831 [2024-09-29 21:36:02.580094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.831 Passthru0 00:04:43.831 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.831 
21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.831 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.831 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.831 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.831 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.831 { 00:04:43.831 "name": "Malloc0", 00:04:43.831 "aliases": [ 00:04:43.831 "e97ca031-411a-4c36-89bd-62f08f4f9529" 00:04:43.831 ], 00:04:43.831 "product_name": "Malloc disk", 00:04:43.831 "block_size": 512, 00:04:43.831 "num_blocks": 16384, 00:04:43.831 "uuid": "e97ca031-411a-4c36-89bd-62f08f4f9529", 00:04:43.831 "assigned_rate_limits": { 00:04:43.831 "rw_ios_per_sec": 0, 00:04:43.831 "rw_mbytes_per_sec": 0, 00:04:43.831 "r_mbytes_per_sec": 0, 00:04:43.831 "w_mbytes_per_sec": 0 00:04:43.831 }, 00:04:43.831 "claimed": true, 00:04:43.831 "claim_type": "exclusive_write", 00:04:43.831 "zoned": false, 00:04:43.831 "supported_io_types": { 00:04:43.831 "read": true, 00:04:43.831 "write": true, 00:04:43.831 "unmap": true, 00:04:43.831 "flush": true, 00:04:43.831 "reset": true, 00:04:43.831 "nvme_admin": false, 00:04:43.831 "nvme_io": false, 00:04:43.831 "nvme_io_md": false, 00:04:43.831 "write_zeroes": true, 00:04:43.831 "zcopy": true, 00:04:43.831 "get_zone_info": false, 00:04:43.831 "zone_management": false, 00:04:43.831 "zone_append": false, 00:04:43.831 "compare": false, 00:04:43.831 "compare_and_write": false, 00:04:43.831 "abort": true, 00:04:43.831 "seek_hole": false, 00:04:43.831 "seek_data": false, 00:04:43.831 "copy": true, 00:04:43.831 "nvme_iov_md": false 00:04:43.831 }, 00:04:43.831 "memory_domains": [ 00:04:43.831 { 00:04:43.831 "dma_device_id": "system", 00:04:43.831 "dma_device_type": 1 00:04:43.831 }, 00:04:43.831 { 00:04:43.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.831 "dma_device_type": 2 00:04:43.831 } 00:04:43.831 ], 00:04:43.831 "driver_specific": {} 00:04:43.831 }, 00:04:43.831 { 00:04:43.831 "name": "Passthru0", 00:04:43.831 "aliases": [ 00:04:43.831 "c55ba820-7cb4-51f4-9243-10ccaa0dd6f2" 00:04:43.831 ], 00:04:43.831 "product_name": "passthru", 00:04:43.831 "block_size": 512, 00:04:43.831 "num_blocks": 16384, 00:04:43.831 "uuid": "c55ba820-7cb4-51f4-9243-10ccaa0dd6f2", 00:04:43.831 "assigned_rate_limits": { 00:04:43.831 "rw_ios_per_sec": 0, 00:04:43.831 "rw_mbytes_per_sec": 0, 00:04:43.831 "r_mbytes_per_sec": 0, 00:04:43.831 "w_mbytes_per_sec": 0 00:04:43.831 }, 00:04:43.831 "claimed": false, 00:04:43.831 "zoned": false, 00:04:43.831 "supported_io_types": { 00:04:43.831 "read": true, 00:04:43.831 "write": true, 00:04:43.831 "unmap": true, 00:04:43.831 "flush": true, 00:04:43.831 "reset": true, 00:04:43.831 "nvme_admin": false, 00:04:43.831 "nvme_io": false, 00:04:43.831 "nvme_io_md": false, 00:04:43.831 "write_zeroes": true, 00:04:43.831 "zcopy": true, 00:04:43.831 "get_zone_info": false, 00:04:43.831 "zone_management": false, 00:04:43.831 "zone_append": false, 00:04:43.831 "compare": false, 00:04:43.831 "compare_and_write": false, 00:04:43.831 "abort": true, 00:04:43.831 "seek_hole": false, 00:04:43.831 "seek_data": false, 00:04:43.831 "copy": true, 00:04:43.832 "nvme_iov_md": false 00:04:43.832 }, 00:04:43.832 "memory_domains": [ 00:04:43.832 { 00:04:43.832 "dma_device_id": "system", 00:04:43.832 "dma_device_type": 1 00:04:43.832 }, 00:04:43.832 { 00:04:43.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.832 "dma_device_type": 2 
00:04:43.832 } 00:04:43.832 ], 00:04:43.832 "driver_specific": { 00:04:43.832 "passthru": { 00:04:43.832 "name": "Passthru0", 00:04:43.832 "base_bdev_name": "Malloc0" 00:04:43.832 } 00:04:43.832 } 00:04:43.832 } 00:04:43.832 ]' 00:04:43.832 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.832 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.832 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.832 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.832 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.832 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.832 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.832 21:36:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.832 00:04:43.832 real 0m0.243s 00:04:43.832 user 0m0.132s 00:04:43.832 sys 0m0.030s 00:04:43.832 ************************************ 00:04:43.832 END TEST rpc_integrity 00:04:43.832 ************************************ 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.832 21:36:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.832 21:36:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.832 21:36:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.832 21:36:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.832 21:36:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.832 ************************************ 00:04:43.832 START TEST rpc_plugins 00:04:43.832 ************************************ 00:04:43.832 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:43.832 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.832 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.832 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.832 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.832 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.832 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.832 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.832 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.832 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.832 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.832 { 00:04:43.832 "name": "Malloc1", 00:04:43.832 "aliases": 
[ 00:04:43.832 "062433b2-6de2-4fa5-b020-0a372060e110" 00:04:43.832 ], 00:04:43.832 "product_name": "Malloc disk", 00:04:43.832 "block_size": 4096, 00:04:43.832 "num_blocks": 256, 00:04:43.832 "uuid": "062433b2-6de2-4fa5-b020-0a372060e110", 00:04:43.832 "assigned_rate_limits": { 00:04:43.832 "rw_ios_per_sec": 0, 00:04:43.832 "rw_mbytes_per_sec": 0, 00:04:43.832 "r_mbytes_per_sec": 0, 00:04:43.832 "w_mbytes_per_sec": 0 00:04:43.832 }, 00:04:43.832 "claimed": false, 00:04:43.832 "zoned": false, 00:04:43.832 "supported_io_types": { 00:04:43.832 "read": true, 00:04:43.832 "write": true, 00:04:43.832 "unmap": true, 00:04:43.832 "flush": true, 00:04:43.832 "reset": true, 00:04:43.832 "nvme_admin": false, 00:04:43.832 "nvme_io": false, 00:04:43.832 "nvme_io_md": false, 00:04:43.832 "write_zeroes": true, 00:04:43.832 "zcopy": true, 00:04:43.832 "get_zone_info": false, 00:04:43.832 "zone_management": false, 00:04:43.832 "zone_append": false, 00:04:43.832 "compare": false, 00:04:43.832 "compare_and_write": false, 00:04:43.832 "abort": true, 00:04:43.832 "seek_hole": false, 00:04:43.832 "seek_data": false, 00:04:43.832 "copy": true, 00:04:43.832 "nvme_iov_md": false 00:04:43.832 }, 00:04:43.832 "memory_domains": [ 00:04:43.832 { 00:04:43.832 "dma_device_id": "system", 00:04:43.832 "dma_device_type": 1 00:04:43.832 }, 00:04:43.832 { 00:04:43.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.832 "dma_device_type": 2 00:04:43.832 } 00:04:43.832 ], 00:04:43.832 "driver_specific": {} 00:04:43.832 } 00:04:43.832 ]' 00:04:43.832 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:44.091 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:44.091 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:44.091 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.091 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.091 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.091 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:44.091 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.091 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.091 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.091 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:44.091 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:44.091 21:36:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:44.091 00:04:44.091 real 0m0.108s 00:04:44.091 user 0m0.066s 00:04:44.091 sys 0m0.009s 00:04:44.091 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.091 ************************************ 00:04:44.091 END TEST rpc_plugins 00:04:44.091 ************************************ 00:04:44.091 21:36:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.091 21:36:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:44.091 21:36:02 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.091 21:36:02 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.091 21:36:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.091 ************************************ 00:04:44.091 START TEST rpc_trace_cmd_test 00:04:44.091 ************************************ 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:44.091 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57492", 00:04:44.091 "tpoint_group_mask": "0x8", 00:04:44.091 "iscsi_conn": { 00:04:44.091 "mask": "0x2", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "scsi": { 00:04:44.091 "mask": "0x4", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "bdev": { 00:04:44.091 "mask": "0x8", 00:04:44.091 "tpoint_mask": "0xffffffffffffffff" 00:04:44.091 }, 00:04:44.091 "nvmf_rdma": { 00:04:44.091 "mask": "0x10", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "nvmf_tcp": { 00:04:44.091 "mask": "0x20", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "ftl": { 00:04:44.091 "mask": "0x40", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "blobfs": { 00:04:44.091 "mask": "0x80", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "dsa": { 00:04:44.091 "mask": "0x200", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "thread": { 00:04:44.091 "mask": "0x400", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "nvme_pcie": { 00:04:44.091 "mask": "0x800", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "iaa": { 00:04:44.091 "mask": "0x1000", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "nvme_tcp": { 00:04:44.091 "mask": "0x2000", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "bdev_nvme": { 00:04:44.091 "mask": "0x4000", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "sock": { 00:04:44.091 "mask": "0x8000", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "blob": { 00:04:44.091 "mask": "0x10000", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 }, 00:04:44.091 "bdev_raid": { 00:04:44.091 "mask": "0x20000", 00:04:44.091 "tpoint_mask": "0x0" 00:04:44.091 } 00:04:44.091 }' 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.091 21:36:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:44.091 21:36:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:44.091 21:36:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:44.091 21:36:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:44.091 21:36:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.349 21:36:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:44.349 00:04:44.349 real 0m0.163s 00:04:44.349 user 0m0.135s 00:04:44.349 sys 0m0.019s 00:04:44.349 21:36:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.349 ************************************ 00:04:44.349 END TEST rpc_trace_cmd_test 
00:04:44.349 ************************************ 00:04:44.349 21:36:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.349 21:36:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.350 21:36:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.350 21:36:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.350 21:36:03 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.350 21:36:03 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.350 21:36:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.350 ************************************ 00:04:44.350 START TEST rpc_daemon_integrity 00:04:44.350 ************************************ 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.350 { 00:04:44.350 "name": "Malloc2", 00:04:44.350 "aliases": [ 00:04:44.350 "8b0456a5-4788-425e-8f70-518b72d5c660" 00:04:44.350 ], 00:04:44.350 "product_name": "Malloc disk", 00:04:44.350 "block_size": 512, 00:04:44.350 "num_blocks": 16384, 00:04:44.350 "uuid": "8b0456a5-4788-425e-8f70-518b72d5c660", 00:04:44.350 "assigned_rate_limits": { 00:04:44.350 "rw_ios_per_sec": 0, 00:04:44.350 "rw_mbytes_per_sec": 0, 00:04:44.350 "r_mbytes_per_sec": 0, 00:04:44.350 "w_mbytes_per_sec": 0 00:04:44.350 }, 00:04:44.350 "claimed": false, 00:04:44.350 "zoned": false, 00:04:44.350 "supported_io_types": { 00:04:44.350 "read": true, 00:04:44.350 "write": true, 00:04:44.350 "unmap": true, 00:04:44.350 "flush": true, 00:04:44.350 "reset": true, 00:04:44.350 "nvme_admin": false, 00:04:44.350 "nvme_io": false, 00:04:44.350 "nvme_io_md": false, 00:04:44.350 "write_zeroes": true, 00:04:44.350 "zcopy": true, 00:04:44.350 "get_zone_info": false, 00:04:44.350 "zone_management": false, 00:04:44.350 "zone_append": false, 00:04:44.350 "compare": false, 00:04:44.350 "compare_and_write": false, 00:04:44.350 "abort": true, 00:04:44.350 "seek_hole": false, 00:04:44.350 
"seek_data": false, 00:04:44.350 "copy": true, 00:04:44.350 "nvme_iov_md": false 00:04:44.350 }, 00:04:44.350 "memory_domains": [ 00:04:44.350 { 00:04:44.350 "dma_device_id": "system", 00:04:44.350 "dma_device_type": 1 00:04:44.350 }, 00:04:44.350 { 00:04:44.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.350 "dma_device_type": 2 00:04:44.350 } 00:04:44.350 ], 00:04:44.350 "driver_specific": {} 00:04:44.350 } 00:04:44.350 ]' 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.350 [2024-09-29 21:36:03.252924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.350 [2024-09-29 21:36:03.252979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.350 [2024-09-29 21:36:03.252999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:44.350 [2024-09-29 21:36:03.253010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.350 [2024-09-29 21:36:03.255222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.350 [2024-09-29 21:36:03.255258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.350 Passthru0 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.350 { 00:04:44.350 "name": "Malloc2", 00:04:44.350 "aliases": [ 00:04:44.350 "8b0456a5-4788-425e-8f70-518b72d5c660" 00:04:44.350 ], 00:04:44.350 "product_name": "Malloc disk", 00:04:44.350 "block_size": 512, 00:04:44.350 "num_blocks": 16384, 00:04:44.350 "uuid": "8b0456a5-4788-425e-8f70-518b72d5c660", 00:04:44.350 "assigned_rate_limits": { 00:04:44.350 "rw_ios_per_sec": 0, 00:04:44.350 "rw_mbytes_per_sec": 0, 00:04:44.350 "r_mbytes_per_sec": 0, 00:04:44.350 "w_mbytes_per_sec": 0 00:04:44.350 }, 00:04:44.350 "claimed": true, 00:04:44.350 "claim_type": "exclusive_write", 00:04:44.350 "zoned": false, 00:04:44.350 "supported_io_types": { 00:04:44.350 "read": true, 00:04:44.350 "write": true, 00:04:44.350 "unmap": true, 00:04:44.350 "flush": true, 00:04:44.350 "reset": true, 00:04:44.350 "nvme_admin": false, 00:04:44.350 "nvme_io": false, 00:04:44.350 "nvme_io_md": false, 00:04:44.350 "write_zeroes": true, 00:04:44.350 "zcopy": true, 00:04:44.350 "get_zone_info": false, 00:04:44.350 "zone_management": false, 00:04:44.350 "zone_append": false, 00:04:44.350 "compare": false, 00:04:44.350 "compare_and_write": false, 00:04:44.350 "abort": true, 00:04:44.350 "seek_hole": false, 00:04:44.350 "seek_data": false, 00:04:44.350 "copy": true, 00:04:44.350 "nvme_iov_md": false 00:04:44.350 }, 00:04:44.350 
"memory_domains": [ 00:04:44.350 { 00:04:44.350 "dma_device_id": "system", 00:04:44.350 "dma_device_type": 1 00:04:44.350 }, 00:04:44.350 { 00:04:44.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.350 "dma_device_type": 2 00:04:44.350 } 00:04:44.350 ], 00:04:44.350 "driver_specific": {} 00:04:44.350 }, 00:04:44.350 { 00:04:44.350 "name": "Passthru0", 00:04:44.350 "aliases": [ 00:04:44.350 "42ce1991-192b-543a-b242-ee2d388db3f8" 00:04:44.350 ], 00:04:44.350 "product_name": "passthru", 00:04:44.350 "block_size": 512, 00:04:44.350 "num_blocks": 16384, 00:04:44.350 "uuid": "42ce1991-192b-543a-b242-ee2d388db3f8", 00:04:44.350 "assigned_rate_limits": { 00:04:44.350 "rw_ios_per_sec": 0, 00:04:44.350 "rw_mbytes_per_sec": 0, 00:04:44.350 "r_mbytes_per_sec": 0, 00:04:44.350 "w_mbytes_per_sec": 0 00:04:44.350 }, 00:04:44.350 "claimed": false, 00:04:44.350 "zoned": false, 00:04:44.350 "supported_io_types": { 00:04:44.350 "read": true, 00:04:44.350 "write": true, 00:04:44.350 "unmap": true, 00:04:44.350 "flush": true, 00:04:44.350 "reset": true, 00:04:44.350 "nvme_admin": false, 00:04:44.350 "nvme_io": false, 00:04:44.350 "nvme_io_md": false, 00:04:44.350 "write_zeroes": true, 00:04:44.350 "zcopy": true, 00:04:44.350 "get_zone_info": false, 00:04:44.350 "zone_management": false, 00:04:44.350 "zone_append": false, 00:04:44.350 "compare": false, 00:04:44.350 "compare_and_write": false, 00:04:44.350 "abort": true, 00:04:44.350 "seek_hole": false, 00:04:44.350 "seek_data": false, 00:04:44.350 "copy": true, 00:04:44.350 "nvme_iov_md": false 00:04:44.350 }, 00:04:44.350 "memory_domains": [ 00:04:44.350 { 00:04:44.350 "dma_device_id": "system", 00:04:44.350 "dma_device_type": 1 00:04:44.350 }, 00:04:44.350 { 00:04:44.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.350 "dma_device_type": 2 00:04:44.350 } 00:04:44.350 ], 00:04:44.350 "driver_specific": { 00:04:44.350 "passthru": { 00:04:44.350 "name": "Passthru0", 00:04:44.350 "base_bdev_name": "Malloc2" 00:04:44.350 } 00:04:44.350 } 00:04:44.350 } 00:04:44.350 ]' 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.350 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.608 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.608 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.608 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.608 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.608 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.608 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.608 21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.608 
21:36:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.608 00:04:44.608 real 0m0.245s 00:04:44.608 user 0m0.131s 00:04:44.608 sys 0m0.029s 00:04:44.609 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.609 21:36:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.609 ************************************ 00:04:44.609 END TEST rpc_daemon_integrity 00:04:44.609 ************************************ 00:04:44.609 21:36:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.609 21:36:03 rpc -- rpc/rpc.sh@84 -- # killprocess 57492 00:04:44.609 21:36:03 rpc -- common/autotest_common.sh@950 -- # '[' -z 57492 ']' 00:04:44.609 21:36:03 rpc -- common/autotest_common.sh@954 -- # kill -0 57492 00:04:44.609 21:36:03 rpc -- common/autotest_common.sh@955 -- # uname 00:04:44.609 21:36:03 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.609 21:36:03 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57492 00:04:44.609 21:36:03 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.609 killing process with pid 57492 00:04:44.609 21:36:03 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.609 21:36:03 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57492' 00:04:44.609 21:36:03 rpc -- common/autotest_common.sh@969 -- # kill 57492 00:04:44.609 21:36:03 rpc -- common/autotest_common.sh@974 -- # wait 57492 00:04:46.514 00:04:46.514 real 0m3.844s 00:04:46.514 user 0m4.214s 00:04:46.514 sys 0m0.645s 00:04:46.514 ************************************ 00:04:46.514 END TEST rpc 00:04:46.514 ************************************ 00:04:46.514 21:36:05 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.514 21:36:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.514 21:36:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:46.514 21:36:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.514 21:36:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.514 21:36:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.514 ************************************ 00:04:46.514 START TEST skip_rpc 00:04:46.514 ************************************ 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:46.514 * Looking for test storage... 
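The rpc_daemon_integrity trace above is SPDK's rpc_integrity helper run against the daemon: start from an empty bdev list, create a malloc bdev, layer a passthru bdev on top of it, confirm bdev_get_bdevs now reports two entries, then tear both down and check the list is empty again. A minimal sketch of that flow, assuming rpc_cmd is the rpc.py wrapper traced in this log (not the exact test script):

    bdevs=$(rpc_cmd bdev_get_bdevs)                   # starts empty
    [ "$(echo "$bdevs" | jq length)" == 0 ]
    malloc=$(rpc_cmd bdev_malloc_create 8 512)        # 8 MB, 512 B blocks -> "Malloc2"
    rpc_cmd bdev_passthru_create -b "$malloc" -p Passthru0
    [ "$(rpc_cmd bdev_get_bdevs | jq length)" == 2 ]  # malloc + passthru both listed
    rpc_cmd bdev_passthru_delete Passthru0
    rpc_cmd bdev_malloc_delete "$malloc"
    [ "$(rpc_cmd bdev_get_bdevs | jq length)" == 0 ]  # back to empty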
00:04:46.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.514 21:36:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:46.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.514 --rc genhtml_branch_coverage=1 00:04:46.514 --rc genhtml_function_coverage=1 00:04:46.514 --rc genhtml_legend=1 00:04:46.514 --rc geninfo_all_blocks=1 00:04:46.514 --rc geninfo_unexecuted_blocks=1 00:04:46.514 00:04:46.514 ' 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:46.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.514 --rc genhtml_branch_coverage=1 00:04:46.514 --rc genhtml_function_coverage=1 00:04:46.514 --rc genhtml_legend=1 00:04:46.514 --rc geninfo_all_blocks=1 00:04:46.514 --rc geninfo_unexecuted_blocks=1 00:04:46.514 00:04:46.514 ' 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:04:46.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.514 --rc genhtml_branch_coverage=1 00:04:46.514 --rc genhtml_function_coverage=1 00:04:46.514 --rc genhtml_legend=1 00:04:46.514 --rc geninfo_all_blocks=1 00:04:46.514 --rc geninfo_unexecuted_blocks=1 00:04:46.514 00:04:46.514 ' 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:46.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.514 --rc genhtml_branch_coverage=1 00:04:46.514 --rc genhtml_function_coverage=1 00:04:46.514 --rc genhtml_legend=1 00:04:46.514 --rc geninfo_all_blocks=1 00:04:46.514 --rc geninfo_unexecuted_blocks=1 00:04:46.514 00:04:46.514 ' 00:04:46.514 21:36:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:46.514 21:36:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:46.514 21:36:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.514 21:36:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.514 ************************************ 00:04:46.514 START TEST skip_rpc 00:04:46.514 ************************************ 00:04:46.514 21:36:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:46.514 21:36:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57710 00:04:46.514 21:36:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.514 21:36:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:46.514 21:36:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:46.514 [2024-09-29 21:36:05.347612] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
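At this point spdk_tgt has just been launched with --no-rpc-server, so the pass condition for test_skip_rpc is that the RPC call which follows fails. Roughly, with NOT being the exit-status-inverting helper from autotest_common.sh seen in the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    sleep 5                          # give the reactor time to start
    NOT rpc_cmd spdk_get_version     # succeeds only because the RPC fails
    trap - SIGINT SIGTERM EXIT
    killprocess "$spdk_pid"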
00:04:46.514 [2024-09-29 21:36:05.347751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57710 ] 00:04:46.774 [2024-09-29 21:36:05.499489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.774 [2024-09-29 21:36:05.725799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57710 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57710 ']' 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57710 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57710 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.053 killing process with pid 57710 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57710' 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57710 00:04:52.053 21:36:10 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57710 00:04:52.619 00:04:52.620 real 0m6.290s 00:04:52.620 user 0m5.846s 00:04:52.620 sys 0m0.340s 00:04:52.620 21:36:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.620 21:36:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.620 ************************************ 00:04:52.620 END TEST skip_rpc 00:04:52.620 
************************************ 00:04:52.620 21:36:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:52.620 21:36:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.620 21:36:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.620 21:36:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.620 ************************************ 00:04:52.620 START TEST skip_rpc_with_json 00:04:52.620 ************************************ 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57803 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57803 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57803 ']' 00:04:52.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.620 21:36:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.878 [2024-09-29 21:36:11.676821] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
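The skip_rpc_with_json run starting here is a configuration round trip: create a TCP transport over RPC, dump the live configuration with save_config, then relaunch the target from that JSON with the RPC server disabled and verify the transport is re-created. A sketch using the CONFIG_PATH and LOG_PATH set earlier in this log (output redirection is assumed, not traced):

    rpc_cmd nvmf_get_transports --trtype tcp || true   # fails while no transport exists (-19)
    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
    killprocess "$spdk_pid"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
        > /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 2>&1
    grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt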
00:04:52.878 [2024-09-29 21:36:11.676945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57803 ] 00:04:52.878 [2024-09-29 21:36:11.825634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.138 [2024-09-29 21:36:11.971707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.706 [2024-09-29 21:36:12.514051] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:53.706 request: 00:04:53.706 { 00:04:53.706 "trtype": "tcp", 00:04:53.706 "method": "nvmf_get_transports", 00:04:53.706 "req_id": 1 00:04:53.706 } 00:04:53.706 Got JSON-RPC error response 00:04:53.706 response: 00:04:53.706 { 00:04:53.706 "code": -19, 00:04:53.706 "message": "No such device" 00:04:53.706 } 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.706 [2024-09-29 21:36:12.526121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.706 21:36:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.706 { 00:04:53.706 "subsystems": [ 00:04:53.706 { 00:04:53.706 "subsystem": "fsdev", 00:04:53.706 "config": [ 00:04:53.706 { 00:04:53.706 "method": "fsdev_set_opts", 00:04:53.706 "params": { 00:04:53.706 "fsdev_io_pool_size": 65535, 00:04:53.706 "fsdev_io_cache_size": 256 00:04:53.706 } 00:04:53.706 } 00:04:53.706 ] 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "subsystem": "keyring", 00:04:53.706 "config": [] 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "subsystem": "iobuf", 00:04:53.706 "config": [ 00:04:53.706 { 00:04:53.706 "method": "iobuf_set_options", 00:04:53.706 "params": { 00:04:53.706 "small_pool_count": 8192, 00:04:53.706 "large_pool_count": 1024, 00:04:53.706 "small_bufsize": 8192, 00:04:53.706 "large_bufsize": 135168 00:04:53.706 } 00:04:53.706 } 00:04:53.706 ] 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "subsystem": "sock", 00:04:53.706 "config": [ 00:04:53.706 { 00:04:53.706 "method": 
"sock_set_default_impl", 00:04:53.706 "params": { 00:04:53.706 "impl_name": "posix" 00:04:53.706 } 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "method": "sock_impl_set_options", 00:04:53.706 "params": { 00:04:53.706 "impl_name": "ssl", 00:04:53.706 "recv_buf_size": 4096, 00:04:53.706 "send_buf_size": 4096, 00:04:53.706 "enable_recv_pipe": true, 00:04:53.706 "enable_quickack": false, 00:04:53.706 "enable_placement_id": 0, 00:04:53.706 "enable_zerocopy_send_server": true, 00:04:53.706 "enable_zerocopy_send_client": false, 00:04:53.706 "zerocopy_threshold": 0, 00:04:53.706 "tls_version": 0, 00:04:53.706 "enable_ktls": false 00:04:53.706 } 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "method": "sock_impl_set_options", 00:04:53.706 "params": { 00:04:53.706 "impl_name": "posix", 00:04:53.706 "recv_buf_size": 2097152, 00:04:53.706 "send_buf_size": 2097152, 00:04:53.706 "enable_recv_pipe": true, 00:04:53.706 "enable_quickack": false, 00:04:53.706 "enable_placement_id": 0, 00:04:53.706 "enable_zerocopy_send_server": true, 00:04:53.706 "enable_zerocopy_send_client": false, 00:04:53.706 "zerocopy_threshold": 0, 00:04:53.706 "tls_version": 0, 00:04:53.706 "enable_ktls": false 00:04:53.706 } 00:04:53.706 } 00:04:53.706 ] 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "subsystem": "vmd", 00:04:53.706 "config": [] 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "subsystem": "accel", 00:04:53.706 "config": [ 00:04:53.706 { 00:04:53.706 "method": "accel_set_options", 00:04:53.706 "params": { 00:04:53.706 "small_cache_size": 128, 00:04:53.706 "large_cache_size": 16, 00:04:53.706 "task_count": 2048, 00:04:53.706 "sequence_count": 2048, 00:04:53.706 "buf_count": 2048 00:04:53.706 } 00:04:53.706 } 00:04:53.706 ] 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "subsystem": "bdev", 00:04:53.706 "config": [ 00:04:53.706 { 00:04:53.706 "method": "bdev_set_options", 00:04:53.706 "params": { 00:04:53.706 "bdev_io_pool_size": 65535, 00:04:53.706 "bdev_io_cache_size": 256, 00:04:53.706 "bdev_auto_examine": true, 00:04:53.706 "iobuf_small_cache_size": 128, 00:04:53.706 "iobuf_large_cache_size": 16 00:04:53.706 } 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "method": "bdev_raid_set_options", 00:04:53.706 "params": { 00:04:53.706 "process_window_size_kb": 1024, 00:04:53.706 "process_max_bandwidth_mb_sec": 0 00:04:53.706 } 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "method": "bdev_iscsi_set_options", 00:04:53.706 "params": { 00:04:53.706 "timeout_sec": 30 00:04:53.706 } 00:04:53.706 }, 00:04:53.706 { 00:04:53.706 "method": "bdev_nvme_set_options", 00:04:53.706 "params": { 00:04:53.706 "action_on_timeout": "none", 00:04:53.706 "timeout_us": 0, 00:04:53.706 "timeout_admin_us": 0, 00:04:53.706 "keep_alive_timeout_ms": 10000, 00:04:53.706 "arbitration_burst": 0, 00:04:53.706 "low_priority_weight": 0, 00:04:53.706 "medium_priority_weight": 0, 00:04:53.706 "high_priority_weight": 0, 00:04:53.706 "nvme_adminq_poll_period_us": 10000, 00:04:53.706 "nvme_ioq_poll_period_us": 0, 00:04:53.706 "io_queue_requests": 0, 00:04:53.706 "delay_cmd_submit": true, 00:04:53.706 "transport_retry_count": 4, 00:04:53.706 "bdev_retry_count": 3, 00:04:53.707 "transport_ack_timeout": 0, 00:04:53.707 "ctrlr_loss_timeout_sec": 0, 00:04:53.707 "reconnect_delay_sec": 0, 00:04:53.707 "fast_io_fail_timeout_sec": 0, 00:04:53.707 "disable_auto_failback": false, 00:04:53.707 "generate_uuids": false, 00:04:53.707 "transport_tos": 0, 00:04:53.707 "nvme_error_stat": false, 00:04:53.707 "rdma_srq_size": 0, 00:04:53.707 "io_path_stat": false, 00:04:53.707 
"allow_accel_sequence": false, 00:04:53.707 "rdma_max_cq_size": 0, 00:04:53.707 "rdma_cm_event_timeout_ms": 0, 00:04:53.707 "dhchap_digests": [ 00:04:53.707 "sha256", 00:04:53.707 "sha384", 00:04:53.707 "sha512" 00:04:53.707 ], 00:04:53.707 "dhchap_dhgroups": [ 00:04:53.707 "null", 00:04:53.707 "ffdhe2048", 00:04:53.707 "ffdhe3072", 00:04:53.707 "ffdhe4096", 00:04:53.707 "ffdhe6144", 00:04:53.707 "ffdhe8192" 00:04:53.707 ] 00:04:53.707 } 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "method": "bdev_nvme_set_hotplug", 00:04:53.707 "params": { 00:04:53.707 "period_us": 100000, 00:04:53.707 "enable": false 00:04:53.707 } 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "method": "bdev_wait_for_examine" 00:04:53.707 } 00:04:53.707 ] 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "subsystem": "scsi", 00:04:53.707 "config": null 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "subsystem": "scheduler", 00:04:53.707 "config": [ 00:04:53.707 { 00:04:53.707 "method": "framework_set_scheduler", 00:04:53.707 "params": { 00:04:53.707 "name": "static" 00:04:53.707 } 00:04:53.707 } 00:04:53.707 ] 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "subsystem": "vhost_scsi", 00:04:53.707 "config": [] 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "subsystem": "vhost_blk", 00:04:53.707 "config": [] 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "subsystem": "ublk", 00:04:53.707 "config": [] 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "subsystem": "nbd", 00:04:53.707 "config": [] 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "subsystem": "nvmf", 00:04:53.707 "config": [ 00:04:53.707 { 00:04:53.707 "method": "nvmf_set_config", 00:04:53.707 "params": { 00:04:53.707 "discovery_filter": "match_any", 00:04:53.707 "admin_cmd_passthru": { 00:04:53.707 "identify_ctrlr": false 00:04:53.707 }, 00:04:53.707 "dhchap_digests": [ 00:04:53.707 "sha256", 00:04:53.707 "sha384", 00:04:53.707 "sha512" 00:04:53.707 ], 00:04:53.707 "dhchap_dhgroups": [ 00:04:53.707 "null", 00:04:53.707 "ffdhe2048", 00:04:53.707 "ffdhe3072", 00:04:53.707 "ffdhe4096", 00:04:53.707 "ffdhe6144", 00:04:53.707 "ffdhe8192" 00:04:53.707 ] 00:04:53.707 } 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "method": "nvmf_set_max_subsystems", 00:04:53.707 "params": { 00:04:53.707 "max_subsystems": 1024 00:04:53.707 } 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "method": "nvmf_set_crdt", 00:04:53.707 "params": { 00:04:53.707 "crdt1": 0, 00:04:53.707 "crdt2": 0, 00:04:53.707 "crdt3": 0 00:04:53.707 } 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "method": "nvmf_create_transport", 00:04:53.707 "params": { 00:04:53.707 "trtype": "TCP", 00:04:53.707 "max_queue_depth": 128, 00:04:53.707 "max_io_qpairs_per_ctrlr": 127, 00:04:53.707 "in_capsule_data_size": 4096, 00:04:53.707 "max_io_size": 131072, 00:04:53.707 "io_unit_size": 131072, 00:04:53.707 "max_aq_depth": 128, 00:04:53.707 "num_shared_buffers": 511, 00:04:53.707 "buf_cache_size": 4294967295, 00:04:53.707 "dif_insert_or_strip": false, 00:04:53.707 "zcopy": false, 00:04:53.707 "c2h_success": true, 00:04:53.707 "sock_priority": 0, 00:04:53.707 "abort_timeout_sec": 1, 00:04:53.707 "ack_timeout": 0, 00:04:53.707 "data_wr_pool_size": 0 00:04:53.707 } 00:04:53.707 } 00:04:53.707 ] 00:04:53.707 }, 00:04:53.707 { 00:04:53.707 "subsystem": "iscsi", 00:04:53.707 "config": [ 00:04:53.707 { 00:04:53.707 "method": "iscsi_set_options", 00:04:53.707 "params": { 00:04:53.707 "node_base": "iqn.2016-06.io.spdk", 00:04:53.707 "max_sessions": 128, 00:04:53.707 "max_connections_per_session": 2, 00:04:53.707 "max_queue_depth": 64, 00:04:53.707 "default_time2wait": 2, 
00:04:53.707 "default_time2retain": 20, 00:04:53.707 "first_burst_length": 8192, 00:04:53.707 "immediate_data": true, 00:04:53.707 "allow_duplicated_isid": false, 00:04:53.707 "error_recovery_level": 0, 00:04:53.707 "nop_timeout": 60, 00:04:53.707 "nop_in_interval": 30, 00:04:53.707 "disable_chap": false, 00:04:53.707 "require_chap": false, 00:04:53.707 "mutual_chap": false, 00:04:53.707 "chap_group": 0, 00:04:53.707 "max_large_datain_per_connection": 64, 00:04:53.707 "max_r2t_per_connection": 4, 00:04:53.707 "pdu_pool_size": 36864, 00:04:53.707 "immediate_data_pool_size": 16384, 00:04:53.707 "data_out_pool_size": 2048 00:04:53.707 } 00:04:53.707 } 00:04:53.707 ] 00:04:53.707 } 00:04:53.707 ] 00:04:53.707 } 00:04:53.707 21:36:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:53.707 21:36:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57803 00:04:53.707 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57803 ']' 00:04:53.707 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57803 00:04:53.707 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:53.966 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.966 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57803 00:04:53.966 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.966 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.966 killing process with pid 57803 00:04:53.966 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57803' 00:04:53.966 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57803 00:04:53.966 21:36:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57803 00:04:55.341 21:36:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57843 00:04:55.341 21:36:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:55.341 21:36:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57843 00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57843 ']' 00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57843 00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57843 00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.604 killing process with pid 57843 00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57843' 00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57843 
00:05:00.604 21:36:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57843 00:05:01.231 21:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:01.231 21:36:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:01.231 00:05:01.231 real 0m8.611s 00:05:01.231 user 0m8.236s 00:05:01.231 sys 0m0.595s 00:05:01.231 21:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.231 ************************************ 00:05:01.231 END TEST skip_rpc_with_json 00:05:01.231 ************************************ 00:05:01.231 21:36:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.489 21:36:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:01.489 21:36:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.489 21:36:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.489 21:36:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.489 ************************************ 00:05:01.489 START TEST skip_rpc_with_delay 00:05:01.489 ************************************ 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:01.489 [2024-09-29 21:36:20.324782] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
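The "Cannot use '--wait-for-rpc'" error above is the point of skip_rpc_with_delay: the flag is invalid when the RPC server is disabled, so spdk_tgt must refuse to start, and the test passes precisely because the launch exits non-zero (the unlink error that follows is teardown noise). In outline:

    # the invalid flag combination must make spdk_tgt exit non-zero
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc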
00:05:01.489 [2024-09-29 21:36:20.324892] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:01.489 00:05:01.489 real 0m0.119s 00:05:01.489 user 0m0.073s 00:05:01.489 sys 0m0.044s 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.489 ************************************ 00:05:01.489 END TEST skip_rpc_with_delay 00:05:01.489 ************************************ 00:05:01.489 21:36:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:01.489 21:36:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:01.489 21:36:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:01.489 21:36:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:01.489 21:36:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.489 21:36:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.489 21:36:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.489 ************************************ 00:05:01.489 START TEST exit_on_failed_rpc_init 00:05:01.489 ************************************ 00:05:01.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.489 21:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:01.489 21:36:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57965 00:05:01.489 21:36:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57965 00:05:01.489 21:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57965 ']' 00:05:01.489 21:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.489 21:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.489 21:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.489 21:36:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.489 21:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.489 21:36:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.748 [2024-09-29 21:36:20.516716] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:01.748 [2024-09-29 21:36:20.516892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57965 ] 00:05:01.748 [2024-09-29 21:36:20.684152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.006 [2024-09-29 21:36:20.862440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:02.572 21:36:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:02.572 [2024-09-29 21:36:21.521660] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:02.572 [2024-09-29 21:36:21.521962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57983 ] 00:05:02.831 [2024-09-29 21:36:21.671598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.090 [2024-09-29 21:36:21.855205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.090 [2024-09-29 21:36:21.855286] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
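The "socket path /var/tmp/spdk.sock in use" error above is deliberate: exit_on_failed_rpc_init starts one target that owns the default RPC socket, then launches a second target on another core, which must fail RPC initialization and exit non-zero. A rough outline, using the helper names traced in this log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &    # first target owns /var/tmp/spdk.sock
    spdk_pid=$!
    waitforlisten "$spdk_pid"
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2  # same socket: RPC init fails
    killprocess "$spdk_pid"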
00:05:03.090 [2024-09-29 21:36:21.855299] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:03.090 [2024-09-29 21:36:21.855310] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57965 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57965 ']' 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57965 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57965 00:05:03.348 killing process with pid 57965 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57965' 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57965 00:05:03.348 21:36:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57965 00:05:04.720 00:05:04.720 real 0m2.992s 00:05:04.720 user 0m3.488s 00:05:04.720 sys 0m0.459s 00:05:04.720 21:36:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.720 21:36:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.720 ************************************ 00:05:04.720 END TEST exit_on_failed_rpc_init 00:05:04.720 ************************************ 00:05:04.720 21:36:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.720 00:05:04.720 real 0m18.321s 00:05:04.720 user 0m17.776s 00:05:04.720 sys 0m1.610s 00:05:04.720 21:36:23 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.720 21:36:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.720 ************************************ 00:05:04.720 END TEST skip_rpc 00:05:04.720 ************************************ 00:05:04.720 21:36:23 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:04.720 21:36:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.720 21:36:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.720 21:36:23 -- common/autotest_common.sh@10 -- # set +x 00:05:04.720 
************************************ 00:05:04.720 START TEST rpc_client 00:05:04.720 ************************************ 00:05:04.720 21:36:23 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:04.720 * Looking for test storage... 00:05:04.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:04.720 21:36:23 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:04.720 21:36:23 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:04.720 21:36:23 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:04.720 21:36:23 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:04.720 21:36:23 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:04.721 21:36:23 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.721 21:36:23 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:04.721 21:36:23 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.721 21:36:23 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.721 21:36:23 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.721 21:36:23 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:04.721 21:36:23 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.721 21:36:23 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:04.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.721 --rc genhtml_branch_coverage=1 00:05:04.721 --rc genhtml_function_coverage=1 00:05:04.721 --rc genhtml_legend=1 00:05:04.721 --rc geninfo_all_blocks=1 00:05:04.721 --rc geninfo_unexecuted_blocks=1 00:05:04.721 00:05:04.721 ' 00:05:04.721 21:36:23 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:04.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.721 --rc genhtml_branch_coverage=1 00:05:04.721 --rc genhtml_function_coverage=1 00:05:04.721 --rc genhtml_legend=1 00:05:04.721 --rc geninfo_all_blocks=1 00:05:04.721 --rc geninfo_unexecuted_blocks=1 00:05:04.721 00:05:04.721 ' 00:05:04.721 21:36:23 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:04.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.721 --rc genhtml_branch_coverage=1 00:05:04.721 --rc genhtml_function_coverage=1 00:05:04.721 --rc genhtml_legend=1 00:05:04.721 --rc geninfo_all_blocks=1 00:05:04.721 --rc geninfo_unexecuted_blocks=1 00:05:04.721 00:05:04.721 ' 00:05:04.721 21:36:23 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:04.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.721 --rc genhtml_branch_coverage=1 00:05:04.721 --rc genhtml_function_coverage=1 00:05:04.721 --rc genhtml_legend=1 00:05:04.721 --rc geninfo_all_blocks=1 00:05:04.721 --rc geninfo_unexecuted_blocks=1 00:05:04.721 00:05:04.721 ' 00:05:04.721 21:36:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:04.721 OK 00:05:04.721 21:36:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:04.721 00:05:04.721 real 0m0.182s 00:05:04.721 user 0m0.105s 00:05:04.721 sys 0m0.085s 00:05:04.721 21:36:23 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.721 21:36:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:04.721 ************************************ 00:05:04.721 END TEST rpc_client 00:05:04.721 ************************************ 00:05:04.721 21:36:23 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:04.721 21:36:23 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.721 21:36:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.721 21:36:23 -- common/autotest_common.sh@10 -- # set +x 00:05:04.721 ************************************ 00:05:04.721 START TEST json_config 00:05:04.721 ************************************ 00:05:04.721 21:36:23 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:04.979 21:36:23 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:04.979 21:36:23 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:04.979 21:36:23 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:04.979 21:36:23 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:04.979 21:36:23 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.979 21:36:23 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.979 21:36:23 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.979 21:36:23 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.979 21:36:23 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.979 21:36:23 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.979 21:36:23 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.979 21:36:23 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.979 21:36:23 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.979 21:36:23 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.979 21:36:23 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.979 21:36:23 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:04.979 21:36:23 json_config -- scripts/common.sh@345 -- # : 1 00:05:04.979 21:36:23 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.979 21:36:23 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.979 21:36:23 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:04.979 21:36:23 json_config -- scripts/common.sh@353 -- # local d=1 00:05:04.979 21:36:23 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.979 21:36:23 json_config -- scripts/common.sh@355 -- # echo 1 00:05:04.979 21:36:23 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.979 21:36:23 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:04.979 21:36:23 json_config -- scripts/common.sh@353 -- # local d=2 00:05:04.979 21:36:23 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.979 21:36:23 json_config -- scripts/common.sh@355 -- # echo 2 00:05:04.979 21:36:23 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.979 21:36:23 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.979 21:36:23 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.979 21:36:23 json_config -- scripts/common.sh@368 -- # return 0 00:05:04.979 21:36:23 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.979 21:36:23 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 21:36:23 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 21:36:23 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 21:36:23 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:04.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.980 --rc genhtml_branch_coverage=1 00:05:04.980 --rc genhtml_function_coverage=1 00:05:04.980 --rc genhtml_legend=1 00:05:04.980 --rc geninfo_all_blocks=1 00:05:04.980 --rc geninfo_unexecuted_blocks=1 00:05:04.980 00:05:04.980 ' 00:05:04.980 21:36:23 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:04.980 21:36:23 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a23feacc-ffd4-4573-b7c3-e3cf82b0b04d 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a23feacc-ffd4-4573-b7c3-e3cf82b0b04d 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:04.980 21:36:23 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.980 21:36:23 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.980 21:36:23 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.980 21:36:23 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.980 21:36:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.980 21:36:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.980 21:36:23 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.980 21:36:23 json_config -- paths/export.sh@5 -- # export PATH 00:05:04.980 21:36:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@51 -- # : 0 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:04.980 21:36:23 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.980 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.980 21:36:23 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.980 21:36:23 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:04.980 21:36:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:04.980 21:36:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:04.980 21:36:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:04.980 21:36:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:04.980 21:36:23 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:04.980 WARNING: No tests are enabled so not running JSON configuration tests 00:05:04.980 21:36:23 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:04.980 00:05:04.980 real 0m0.140s 00:05:04.980 user 0m0.084s 00:05:04.980 sys 0m0.057s 00:05:04.980 21:36:23 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.980 21:36:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.980 ************************************ 00:05:04.980 END TEST json_config 00:05:04.980 ************************************ 00:05:04.980 21:36:23 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:04.980 21:36:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.980 21:36:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.980 21:36:23 -- common/autotest_common.sh@10 -- # set +x 00:05:04.980 ************************************ 00:05:04.980 START TEST json_config_extra_key 00:05:04.980 ************************************ 00:05:04.980 21:36:23 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:04.980 21:36:23 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:04.980 21:36:23 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:04.980 21:36:23 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:05.238 21:36:23 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.238 21:36:23 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.238 21:36:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:05.238 21:36:23 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.238 21:36:23 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:05.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.238 --rc genhtml_branch_coverage=1 00:05:05.239 --rc genhtml_function_coverage=1 00:05:05.239 --rc genhtml_legend=1 00:05:05.239 --rc geninfo_all_blocks=1 00:05:05.239 --rc geninfo_unexecuted_blocks=1 00:05:05.239 00:05:05.239 ' 00:05:05.239 21:36:23 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:05.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.239 --rc genhtml_branch_coverage=1 00:05:05.239 --rc genhtml_function_coverage=1 00:05:05.239 --rc genhtml_legend=1 00:05:05.239 --rc geninfo_all_blocks=1 00:05:05.239 --rc geninfo_unexecuted_blocks=1 00:05:05.239 00:05:05.239 ' 00:05:05.239 21:36:23 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:05.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.239 --rc genhtml_branch_coverage=1 00:05:05.239 --rc genhtml_function_coverage=1 00:05:05.239 --rc genhtml_legend=1 00:05:05.239 --rc geninfo_all_blocks=1 00:05:05.239 --rc geninfo_unexecuted_blocks=1 00:05:05.239 00:05:05.239 ' 00:05:05.239 21:36:23 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:05.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.239 --rc genhtml_branch_coverage=1 00:05:05.239 --rc 
genhtml_function_coverage=1 00:05:05.239 --rc genhtml_legend=1 00:05:05.239 --rc geninfo_all_blocks=1 00:05:05.239 --rc geninfo_unexecuted_blocks=1 00:05:05.239 00:05:05.239 ' 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a23feacc-ffd4-4573-b7c3-e3cf82b0b04d 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a23feacc-ffd4-4573-b7c3-e3cf82b0b04d 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:05.239 21:36:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.239 21:36:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.239 21:36:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.239 21:36:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.239 21:36:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.239 21:36:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.239 21:36:23 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.239 21:36:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:05.239 21:36:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.239 21:36:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:05.239 INFO: launching applications... 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
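Worth flagging here: the "[: : integer expression expected" message that has now appeared twice in this trace (once per test sourcing nvmf/common.sh) is a real shell error, not log noise. Line 33 of nvmf/common.sh expands an unset or empty variable into [ '' -eq 1 ], and [ cannot compare an empty string numerically. The variable's name is not visible in the xtrace, so the sketch below uses a hypothetical FLAG; the usual guard is a default expansion:

    # Hedged sketch, not SPDK's actual code: FLAG stands in for whatever
    # variable nvmf/common.sh tests at line 33 (its name is not in the trace).
    FLAG=""                          # empty, exactly what the xtrace shows
    if [ "${FLAG:-0}" -eq 1 ]; then  # ':-0' substitutes 0, so '[' always sees an integer
        echo "feature enabled"
    fi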
00:05:05.239 21:36:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58177 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.239 Waiting for target to run... 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58177 /var/tmp/spdk_tgt.sock 00:05:05.239 21:36:23 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58177 ']' 00:05:05.239 21:36:23 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.239 21:36:23 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:05.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.239 21:36:23 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.239 21:36:23 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:05.239 21:36:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:05.239 21:36:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:05.239 [2024-09-29 21:36:24.071991] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:05.239 [2024-09-29 21:36:24.072115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58177 ] 00:05:05.497 [2024-09-29 21:36:24.377430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.755 [2024-09-29 21:36:24.513858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.014 21:36:24 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.014 00:05:06.014 21:36:24 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:06.014 21:36:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:06.014 INFO: shutting down applications... 00:05:06.014 21:36:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
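The shutdown traced next (json_config/common.sh) is a polling pattern: send SIGINT to the target, then probe it with kill -0 up to 30 times, sleeping 0.5 s between probes, so the test fails fast if the app ignores the signal. A minimal sketch of that loop, using the pid from this particular run for illustration:

    # Sketch of json_config_test_shutdown_app's polling, reconstructed from the
    # xtrace below; 58177 is the target pid from this run, not a fixed value.
    app_pid=58177
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then   # pid gone -> clean exit
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5    # give the reactor time to drain and exit
    done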
00:05:06.014 21:36:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:06.014 21:36:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:06.014 21:36:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:06.014 21:36:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58177 ]] 00:05:06.014 21:36:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58177 00:05:06.014 21:36:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:06.014 21:36:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.014 21:36:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58177 00:05:06.014 21:36:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.580 21:36:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.580 21:36:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.580 21:36:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58177 00:05:06.580 21:36:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.147 21:36:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.147 21:36:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.147 21:36:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58177 00:05:07.147 21:36:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.714 21:36:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.714 21:36:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.714 21:36:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58177 00:05:07.714 21:36:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.714 21:36:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:07.714 SPDK target shutdown done 00:05:07.714 21:36:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.714 21:36:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.714 Success 00:05:07.714 21:36:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:07.714 00:05:07.714 real 0m2.540s 00:05:07.714 user 0m2.299s 00:05:07.714 sys 0m0.376s 00:05:07.714 21:36:26 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.714 21:36:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.714 ************************************ 00:05:07.714 END TEST json_config_extra_key 00:05:07.714 ************************************ 00:05:07.714 21:36:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.714 21:36:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.714 21:36:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.714 21:36:26 -- common/autotest_common.sh@10 -- # set +x 00:05:07.714 ************************************ 00:05:07.714 START TEST alias_rpc 00:05:07.714 ************************************ 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.714 * Looking for test storage... 
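Each TEST block re-sources autotest_common.sh, which is why the same lcov probe keeps reappearing in this log: `lcov --version` is parsed with awk, then `lt 1.15 2` calls cmp_versions from scripts/common.sh to pick which --rc option names to export. A simplified reconstruction of that comparison (the real helper also splits on '-' and ':' and has gt/eq paths):

    # Simplified sketch of the scripts/common.sh version compare traced above:
    # split both versions on dots, compare component by component.
    lt() {
        local -a ver1 ver2
        IFS=. read -ra ver1 <<< "$1"
        IFS=. read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly lower
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    # lcov 1.15 is pre-2.0, so the trace sets the 1.x-style option names:
    lt 1.15 2 && echo "use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"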
00:05:07.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.714 21:36:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:07.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.714 --rc genhtml_branch_coverage=1 00:05:07.714 --rc genhtml_function_coverage=1 00:05:07.714 --rc genhtml_legend=1 00:05:07.714 --rc geninfo_all_blocks=1 00:05:07.714 --rc geninfo_unexecuted_blocks=1 00:05:07.714 00:05:07.714 ' 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:07.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.714 --rc genhtml_branch_coverage=1 00:05:07.714 --rc genhtml_function_coverage=1 00:05:07.714 --rc genhtml_legend=1 00:05:07.714 --rc geninfo_all_blocks=1 00:05:07.714 --rc geninfo_unexecuted_blocks=1 00:05:07.714 00:05:07.714 ' 00:05:07.714 21:36:26 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:07.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.714 --rc genhtml_branch_coverage=1 00:05:07.714 --rc genhtml_function_coverage=1 00:05:07.714 --rc genhtml_legend=1 00:05:07.714 --rc geninfo_all_blocks=1 00:05:07.714 --rc geninfo_unexecuted_blocks=1 00:05:07.714 00:05:07.714 ' 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:07.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.714 --rc genhtml_branch_coverage=1 00:05:07.714 --rc genhtml_function_coverage=1 00:05:07.714 --rc genhtml_legend=1 00:05:07.714 --rc geninfo_all_blocks=1 00:05:07.714 --rc geninfo_unexecuted_blocks=1 00:05:07.714 00:05:07.714 ' 00:05:07.714 21:36:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:07.714 21:36:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58263 00:05:07.714 21:36:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58263 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58263 ']' 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.714 21:36:26 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.715 21:36:26 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.715 21:36:26 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.715 21:36:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.715 21:36:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.715 [2024-09-29 21:36:26.650228] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
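waitforlisten, traced just above, is what turns "spdk_tgt started" into "spdk_tgt is usable": it retries up to max_retries=100 until the target answers on its RPC socket (/var/tmp/spdk.sock here). The exact probe is not visible in this xtrace, so this is only a rough equivalent of the idea, not SPDK's implementation:

    # Rough, hedged equivalent of waitforlisten; the real helper's probe differs
    # (it issues an RPC rather than just checking that the socket file exists).
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [[ -S "$rpc_addr" ]] && return 0         # UNIX socket present -> listening
            sleep 0.1
        done
        return 1
    }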
00:05:07.715 [2024-09-29 21:36:26.650771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58263 ] 00:05:07.972 [2024-09-29 21:36:26.798054] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.972 [2024-09-29 21:36:26.949329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.539 21:36:27 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.539 21:36:27 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:08.539 21:36:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:08.798 21:36:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58263 00:05:08.798 21:36:27 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58263 ']' 00:05:08.798 21:36:27 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58263 00:05:08.798 21:36:27 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:08.798 21:36:27 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.798 21:36:27 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58263 00:05:08.798 21:36:27 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.798 21:36:27 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.798 killing process with pid 58263 00:05:08.798 21:36:27 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58263' 00:05:08.798 21:36:27 alias_rpc -- common/autotest_common.sh@969 -- # kill 58263 00:05:08.798 21:36:27 alias_rpc -- common/autotest_common.sh@974 -- # wait 58263 00:05:10.172 00:05:10.172 real 0m2.490s 00:05:10.172 user 0m2.623s 00:05:10.172 sys 0m0.361s 00:05:10.172 21:36:28 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.172 ************************************ 00:05:10.172 END TEST alias_rpc 00:05:10.172 21:36:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.172 ************************************ 00:05:10.172 21:36:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:10.172 21:36:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:10.172 21:36:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.172 21:36:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.172 21:36:28 -- common/autotest_common.sh@10 -- # set +x 00:05:10.172 ************************************ 00:05:10.172 START TEST spdkcli_tcp 00:05:10.172 ************************************ 00:05:10.172 21:36:28 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:10.172 * Looking for test storage... 
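killprocess, which alias_rpc runs against pid 58263 below, is more careful than a bare kill: it verifies the pid is alive, resolves the process name with ps, refuses to touch anything running as "sudo", and then kills and waits so the exit status is collected. Reconstructed from the xtrace:

    # Sketch of common/autotest_common.sh killprocess, following the trace below.
    killprocess_sketch() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1           # must still be running
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1       # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it (the pid is a child of the test shell) and get its exit code
    }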
00:05:10.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.172 21:36:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:10.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.172 --rc genhtml_branch_coverage=1 00:05:10.172 --rc genhtml_function_coverage=1 00:05:10.172 --rc genhtml_legend=1 00:05:10.172 --rc geninfo_all_blocks=1 00:05:10.172 --rc geninfo_unexecuted_blocks=1 00:05:10.172 00:05:10.172 ' 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:10.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.172 --rc genhtml_branch_coverage=1 00:05:10.172 --rc genhtml_function_coverage=1 00:05:10.172 --rc genhtml_legend=1 00:05:10.172 --rc geninfo_all_blocks=1 00:05:10.172 --rc geninfo_unexecuted_blocks=1 00:05:10.172 
00:05:10.172 ' 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:10.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.172 --rc genhtml_branch_coverage=1 00:05:10.172 --rc genhtml_function_coverage=1 00:05:10.172 --rc genhtml_legend=1 00:05:10.172 --rc geninfo_all_blocks=1 00:05:10.172 --rc geninfo_unexecuted_blocks=1 00:05:10.172 00:05:10.172 ' 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:10.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.172 --rc genhtml_branch_coverage=1 00:05:10.172 --rc genhtml_function_coverage=1 00:05:10.172 --rc genhtml_legend=1 00:05:10.172 --rc geninfo_all_blocks=1 00:05:10.172 --rc geninfo_unexecuted_blocks=1 00:05:10.172 00:05:10.172 ' 00:05:10.172 21:36:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:10.172 21:36:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:10.172 21:36:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:10.172 21:36:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:10.172 21:36:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:10.172 21:36:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:10.172 21:36:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.172 21:36:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58354 00:05:10.172 21:36:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58354 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58354 ']' 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.172 21:36:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.172 21:36:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.430 [2024-09-29 21:36:29.214207] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
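The spdkcli_tcp test exists to exercise rpc.py's TCP transport: spdk_tgt only listens on a UNIX-domain socket, so the script (traced below) puts socat in front of it as a TCP-to-UNIX bridge on 127.0.0.1:9998 and drives rpc_get_methods through that bridge with retries and a timeout. The same two commands, as run here:

    # As traced below in spdkcli/tcp.sh: bridge TCP port 9998 to the target's UNIX socket...
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!    # the test records this the same way (58371 in this run)
    # ...then talk JSON-RPC over TCP: -r 100 retries, -t 2 second timeout.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"    # tear the bridge down once the method list is verified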
00:05:10.430 [2024-09-29 21:36:29.214326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58354 ] 00:05:10.430 [2024-09-29 21:36:29.361463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.688 [2024-09-29 21:36:29.507767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.688 [2024-09-29 21:36:29.507855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.324 21:36:30 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.324 21:36:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:11.324 21:36:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58371 00:05:11.324 21:36:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.324 21:36:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.324 [ 00:05:11.324 "bdev_malloc_delete", 00:05:11.324 "bdev_malloc_create", 00:05:11.324 "bdev_null_resize", 00:05:11.324 "bdev_null_delete", 00:05:11.324 "bdev_null_create", 00:05:11.324 "bdev_nvme_cuse_unregister", 00:05:11.324 "bdev_nvme_cuse_register", 00:05:11.324 "bdev_opal_new_user", 00:05:11.324 "bdev_opal_set_lock_state", 00:05:11.324 "bdev_opal_delete", 00:05:11.324 "bdev_opal_get_info", 00:05:11.324 "bdev_opal_create", 00:05:11.324 "bdev_nvme_opal_revert", 00:05:11.324 "bdev_nvme_opal_init", 00:05:11.324 "bdev_nvme_send_cmd", 00:05:11.324 "bdev_nvme_set_keys", 00:05:11.324 "bdev_nvme_get_path_iostat", 00:05:11.324 "bdev_nvme_get_mdns_discovery_info", 00:05:11.324 "bdev_nvme_stop_mdns_discovery", 00:05:11.324 "bdev_nvme_start_mdns_discovery", 00:05:11.324 "bdev_nvme_set_multipath_policy", 00:05:11.324 "bdev_nvme_set_preferred_path", 00:05:11.324 "bdev_nvme_get_io_paths", 00:05:11.324 "bdev_nvme_remove_error_injection", 00:05:11.324 "bdev_nvme_add_error_injection", 00:05:11.324 "bdev_nvme_get_discovery_info", 00:05:11.324 "bdev_nvme_stop_discovery", 00:05:11.324 "bdev_nvme_start_discovery", 00:05:11.324 "bdev_nvme_get_controller_health_info", 00:05:11.324 "bdev_nvme_disable_controller", 00:05:11.324 "bdev_nvme_enable_controller", 00:05:11.324 "bdev_nvme_reset_controller", 00:05:11.324 "bdev_nvme_get_transport_statistics", 00:05:11.324 "bdev_nvme_apply_firmware", 00:05:11.324 "bdev_nvme_detach_controller", 00:05:11.324 "bdev_nvme_get_controllers", 00:05:11.324 "bdev_nvme_attach_controller", 00:05:11.324 "bdev_nvme_set_hotplug", 00:05:11.324 "bdev_nvme_set_options", 00:05:11.324 "bdev_passthru_delete", 00:05:11.324 "bdev_passthru_create", 00:05:11.324 "bdev_lvol_set_parent_bdev", 00:05:11.324 "bdev_lvol_set_parent", 00:05:11.324 "bdev_lvol_check_shallow_copy", 00:05:11.324 "bdev_lvol_start_shallow_copy", 00:05:11.324 "bdev_lvol_grow_lvstore", 00:05:11.324 "bdev_lvol_get_lvols", 00:05:11.324 "bdev_lvol_get_lvstores", 00:05:11.324 "bdev_lvol_delete", 00:05:11.324 "bdev_lvol_set_read_only", 00:05:11.324 "bdev_lvol_resize", 00:05:11.324 "bdev_lvol_decouple_parent", 00:05:11.324 "bdev_lvol_inflate", 00:05:11.324 "bdev_lvol_rename", 00:05:11.324 "bdev_lvol_clone_bdev", 00:05:11.324 "bdev_lvol_clone", 00:05:11.324 "bdev_lvol_snapshot", 00:05:11.324 "bdev_lvol_create", 00:05:11.324 "bdev_lvol_delete_lvstore", 00:05:11.324 "bdev_lvol_rename_lvstore", 00:05:11.324 
"bdev_lvol_create_lvstore", 00:05:11.324 "bdev_raid_set_options", 00:05:11.324 "bdev_raid_remove_base_bdev", 00:05:11.324 "bdev_raid_add_base_bdev", 00:05:11.324 "bdev_raid_delete", 00:05:11.324 "bdev_raid_create", 00:05:11.324 "bdev_raid_get_bdevs", 00:05:11.324 "bdev_error_inject_error", 00:05:11.324 "bdev_error_delete", 00:05:11.324 "bdev_error_create", 00:05:11.324 "bdev_split_delete", 00:05:11.324 "bdev_split_create", 00:05:11.324 "bdev_delay_delete", 00:05:11.324 "bdev_delay_create", 00:05:11.324 "bdev_delay_update_latency", 00:05:11.324 "bdev_zone_block_delete", 00:05:11.324 "bdev_zone_block_create", 00:05:11.324 "blobfs_create", 00:05:11.324 "blobfs_detect", 00:05:11.324 "blobfs_set_cache_size", 00:05:11.324 "bdev_xnvme_delete", 00:05:11.324 "bdev_xnvme_create", 00:05:11.324 "bdev_aio_delete", 00:05:11.324 "bdev_aio_rescan", 00:05:11.324 "bdev_aio_create", 00:05:11.324 "bdev_ftl_set_property", 00:05:11.324 "bdev_ftl_get_properties", 00:05:11.324 "bdev_ftl_get_stats", 00:05:11.324 "bdev_ftl_unmap", 00:05:11.324 "bdev_ftl_unload", 00:05:11.324 "bdev_ftl_delete", 00:05:11.324 "bdev_ftl_load", 00:05:11.324 "bdev_ftl_create", 00:05:11.324 "bdev_virtio_attach_controller", 00:05:11.324 "bdev_virtio_scsi_get_devices", 00:05:11.324 "bdev_virtio_detach_controller", 00:05:11.324 "bdev_virtio_blk_set_hotplug", 00:05:11.324 "bdev_iscsi_delete", 00:05:11.324 "bdev_iscsi_create", 00:05:11.324 "bdev_iscsi_set_options", 00:05:11.324 "accel_error_inject_error", 00:05:11.324 "ioat_scan_accel_module", 00:05:11.324 "dsa_scan_accel_module", 00:05:11.324 "iaa_scan_accel_module", 00:05:11.324 "keyring_file_remove_key", 00:05:11.324 "keyring_file_add_key", 00:05:11.324 "keyring_linux_set_options", 00:05:11.324 "fsdev_aio_delete", 00:05:11.324 "fsdev_aio_create", 00:05:11.324 "iscsi_get_histogram", 00:05:11.324 "iscsi_enable_histogram", 00:05:11.324 "iscsi_set_options", 00:05:11.324 "iscsi_get_auth_groups", 00:05:11.324 "iscsi_auth_group_remove_secret", 00:05:11.324 "iscsi_auth_group_add_secret", 00:05:11.324 "iscsi_delete_auth_group", 00:05:11.324 "iscsi_create_auth_group", 00:05:11.324 "iscsi_set_discovery_auth", 00:05:11.324 "iscsi_get_options", 00:05:11.324 "iscsi_target_node_request_logout", 00:05:11.324 "iscsi_target_node_set_redirect", 00:05:11.324 "iscsi_target_node_set_auth", 00:05:11.324 "iscsi_target_node_add_lun", 00:05:11.324 "iscsi_get_stats", 00:05:11.324 "iscsi_get_connections", 00:05:11.324 "iscsi_portal_group_set_auth", 00:05:11.324 "iscsi_start_portal_group", 00:05:11.324 "iscsi_delete_portal_group", 00:05:11.324 "iscsi_create_portal_group", 00:05:11.324 "iscsi_get_portal_groups", 00:05:11.324 "iscsi_delete_target_node", 00:05:11.324 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.324 "iscsi_target_node_add_pg_ig_maps", 00:05:11.324 "iscsi_create_target_node", 00:05:11.324 "iscsi_get_target_nodes", 00:05:11.324 "iscsi_delete_initiator_group", 00:05:11.324 "iscsi_initiator_group_remove_initiators", 00:05:11.324 "iscsi_initiator_group_add_initiators", 00:05:11.324 "iscsi_create_initiator_group", 00:05:11.324 "iscsi_get_initiator_groups", 00:05:11.324 "nvmf_set_crdt", 00:05:11.324 "nvmf_set_config", 00:05:11.324 "nvmf_set_max_subsystems", 00:05:11.324 "nvmf_stop_mdns_prr", 00:05:11.324 "nvmf_publish_mdns_prr", 00:05:11.324 "nvmf_subsystem_get_listeners", 00:05:11.324 "nvmf_subsystem_get_qpairs", 00:05:11.324 "nvmf_subsystem_get_controllers", 00:05:11.324 "nvmf_get_stats", 00:05:11.324 "nvmf_get_transports", 00:05:11.324 "nvmf_create_transport", 00:05:11.324 "nvmf_get_targets", 00:05:11.324 
"nvmf_delete_target", 00:05:11.324 "nvmf_create_target", 00:05:11.324 "nvmf_subsystem_allow_any_host", 00:05:11.324 "nvmf_subsystem_set_keys", 00:05:11.324 "nvmf_subsystem_remove_host", 00:05:11.325 "nvmf_subsystem_add_host", 00:05:11.325 "nvmf_ns_remove_host", 00:05:11.325 "nvmf_ns_add_host", 00:05:11.325 "nvmf_subsystem_remove_ns", 00:05:11.325 "nvmf_subsystem_set_ns_ana_group", 00:05:11.325 "nvmf_subsystem_add_ns", 00:05:11.325 "nvmf_subsystem_listener_set_ana_state", 00:05:11.325 "nvmf_discovery_get_referrals", 00:05:11.325 "nvmf_discovery_remove_referral", 00:05:11.325 "nvmf_discovery_add_referral", 00:05:11.325 "nvmf_subsystem_remove_listener", 00:05:11.325 "nvmf_subsystem_add_listener", 00:05:11.325 "nvmf_delete_subsystem", 00:05:11.325 "nvmf_create_subsystem", 00:05:11.325 "nvmf_get_subsystems", 00:05:11.325 "env_dpdk_get_mem_stats", 00:05:11.325 "nbd_get_disks", 00:05:11.325 "nbd_stop_disk", 00:05:11.325 "nbd_start_disk", 00:05:11.325 "ublk_recover_disk", 00:05:11.325 "ublk_get_disks", 00:05:11.325 "ublk_stop_disk", 00:05:11.325 "ublk_start_disk", 00:05:11.325 "ublk_destroy_target", 00:05:11.325 "ublk_create_target", 00:05:11.325 "virtio_blk_create_transport", 00:05:11.325 "virtio_blk_get_transports", 00:05:11.325 "vhost_controller_set_coalescing", 00:05:11.325 "vhost_get_controllers", 00:05:11.325 "vhost_delete_controller", 00:05:11.325 "vhost_create_blk_controller", 00:05:11.325 "vhost_scsi_controller_remove_target", 00:05:11.325 "vhost_scsi_controller_add_target", 00:05:11.325 "vhost_start_scsi_controller", 00:05:11.325 "vhost_create_scsi_controller", 00:05:11.325 "thread_set_cpumask", 00:05:11.325 "scheduler_set_options", 00:05:11.325 "framework_get_governor", 00:05:11.325 "framework_get_scheduler", 00:05:11.325 "framework_set_scheduler", 00:05:11.325 "framework_get_reactors", 00:05:11.325 "thread_get_io_channels", 00:05:11.325 "thread_get_pollers", 00:05:11.325 "thread_get_stats", 00:05:11.325 "framework_monitor_context_switch", 00:05:11.325 "spdk_kill_instance", 00:05:11.325 "log_enable_timestamps", 00:05:11.325 "log_get_flags", 00:05:11.325 "log_clear_flag", 00:05:11.325 "log_set_flag", 00:05:11.325 "log_get_level", 00:05:11.325 "log_set_level", 00:05:11.325 "log_get_print_level", 00:05:11.325 "log_set_print_level", 00:05:11.325 "framework_enable_cpumask_locks", 00:05:11.325 "framework_disable_cpumask_locks", 00:05:11.325 "framework_wait_init", 00:05:11.325 "framework_start_init", 00:05:11.325 "scsi_get_devices", 00:05:11.325 "bdev_get_histogram", 00:05:11.325 "bdev_enable_histogram", 00:05:11.325 "bdev_set_qos_limit", 00:05:11.325 "bdev_set_qd_sampling_period", 00:05:11.325 "bdev_get_bdevs", 00:05:11.325 "bdev_reset_iostat", 00:05:11.325 "bdev_get_iostat", 00:05:11.325 "bdev_examine", 00:05:11.325 "bdev_wait_for_examine", 00:05:11.325 "bdev_set_options", 00:05:11.325 "accel_get_stats", 00:05:11.325 "accel_set_options", 00:05:11.325 "accel_set_driver", 00:05:11.325 "accel_crypto_key_destroy", 00:05:11.325 "accel_crypto_keys_get", 00:05:11.325 "accel_crypto_key_create", 00:05:11.325 "accel_assign_opc", 00:05:11.325 "accel_get_module_info", 00:05:11.325 "accel_get_opc_assignments", 00:05:11.325 "vmd_rescan", 00:05:11.325 "vmd_remove_device", 00:05:11.325 "vmd_enable", 00:05:11.325 "sock_get_default_impl", 00:05:11.325 "sock_set_default_impl", 00:05:11.325 "sock_impl_set_options", 00:05:11.325 "sock_impl_get_options", 00:05:11.325 "iobuf_get_stats", 00:05:11.325 "iobuf_set_options", 00:05:11.325 "keyring_get_keys", 00:05:11.325 "framework_get_pci_devices", 00:05:11.325 
"framework_get_config", 00:05:11.325 "framework_get_subsystems", 00:05:11.325 "fsdev_set_opts", 00:05:11.325 "fsdev_get_opts", 00:05:11.325 "trace_get_info", 00:05:11.325 "trace_get_tpoint_group_mask", 00:05:11.325 "trace_disable_tpoint_group", 00:05:11.325 "trace_enable_tpoint_group", 00:05:11.325 "trace_clear_tpoint_mask", 00:05:11.325 "trace_set_tpoint_mask", 00:05:11.325 "notify_get_notifications", 00:05:11.325 "notify_get_types", 00:05:11.325 "spdk_get_version", 00:05:11.325 "rpc_get_methods" 00:05:11.325 ] 00:05:11.325 21:36:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.325 21:36:30 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.325 21:36:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.325 21:36:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.325 21:36:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58354 00:05:11.325 21:36:30 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58354 ']' 00:05:11.325 21:36:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58354 00:05:11.325 21:36:30 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:11.325 21:36:30 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.325 21:36:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58354 00:05:11.583 21:36:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.583 21:36:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.583 killing process with pid 58354 00:05:11.583 21:36:30 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58354' 00:05:11.583 21:36:30 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58354 00:05:11.583 21:36:30 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58354 00:05:12.960 00:05:12.960 real 0m2.564s 00:05:12.960 user 0m4.507s 00:05:12.960 sys 0m0.402s 00:05:12.960 21:36:31 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.960 21:36:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.960 ************************************ 00:05:12.960 END TEST spdkcli_tcp 00:05:12.960 ************************************ 00:05:12.960 21:36:31 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.960 21:36:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.960 21:36:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.960 21:36:31 -- common/autotest_common.sh@10 -- # set +x 00:05:12.960 ************************************ 00:05:12.960 START TEST dpdk_mem_utility 00:05:12.960 ************************************ 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.960 * Looking for test storage... 
00:05:12.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.960 21:36:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.960 --rc genhtml_branch_coverage=1 00:05:12.960 --rc genhtml_function_coverage=1 00:05:12.960 --rc genhtml_legend=1 00:05:12.960 --rc geninfo_all_blocks=1 00:05:12.960 --rc geninfo_unexecuted_blocks=1 00:05:12.960 00:05:12.960 ' 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.960 --rc 
genhtml_branch_coverage=1 00:05:12.960 --rc genhtml_function_coverage=1 00:05:12.960 --rc genhtml_legend=1 00:05:12.960 --rc geninfo_all_blocks=1 00:05:12.960 --rc geninfo_unexecuted_blocks=1 00:05:12.960 00:05:12.960 ' 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.960 --rc genhtml_branch_coverage=1 00:05:12.960 --rc genhtml_function_coverage=1 00:05:12.960 --rc genhtml_legend=1 00:05:12.960 --rc geninfo_all_blocks=1 00:05:12.960 --rc geninfo_unexecuted_blocks=1 00:05:12.960 00:05:12.960 ' 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.960 --rc genhtml_branch_coverage=1 00:05:12.960 --rc genhtml_function_coverage=1 00:05:12.960 --rc genhtml_legend=1 00:05:12.960 --rc geninfo_all_blocks=1 00:05:12.960 --rc geninfo_unexecuted_blocks=1 00:05:12.960 00:05:12.960 ' 00:05:12.960 21:36:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.960 21:36:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58459 00:05:12.960 21:36:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58459 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58459 ']' 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.960 21:36:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.960 21:36:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.960 [2024-09-29 21:36:31.823134] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
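dpdk_mem_utility's core flow, traced below: the env_dpdk_get_mem_stats RPC makes the running target write a DPDK memory dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders it, first as a whole-dump summary (heaps, mempools, memzones), then per heap with -m 0. The same calls, runnable against a live spdk_tgt:

    # The sequence traced below, issued against an already-running spdk_tgt:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> {"filename": "/tmp/spdk_mem_dump.txt"}   (the dump the next step parses)
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # summary: heaps, mempools, memzones
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # element-level detail for heap id 0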
00:05:12.960 [2024-09-29 21:36:31.823422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58459 ]
00:05:13.221 [2024-09-29 21:36:31.973375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.221 [2024-09-29 21:36:32.120010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.791 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:13.791 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:05:13.791 21:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:13.791 21:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:13.791 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:13.791 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:13.791 {
00:05:13.791   "filename": "/tmp/spdk_mem_dump.txt"
00:05:13.791 }
00:05:13.791 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:13.791 21:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:13.791 DPDK memory size 866.000000 MiB in 1 heap(s)
00:05:13.791 1 heaps totaling size 866.000000 MiB
00:05:13.791   size: 866.000000 MiB heap id: 0
00:05:13.791 end heaps----------
00:05:13.791 9 mempools totaling size 642.649841 MiB
00:05:13.791   size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:13.791   size: 158.602051 MiB name: PDU_data_out_Pool
00:05:13.791   size: 92.545471 MiB name: bdev_io_58459
00:05:13.791   size: 51.011292 MiB name: evtpool_58459
00:05:13.791   size: 50.003479 MiB name: msgpool_58459
00:05:13.791   size: 36.509338 MiB name: fsdev_io_58459
00:05:13.791   size: 21.763794 MiB name: PDU_Pool
00:05:13.791   size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:13.791   size: 0.026123 MiB name: Session_Pool
00:05:13.791 end mempools-------
00:05:13.791 6 memzones totaling size 4.142822 MiB
00:05:13.791   size: 1.000366 MiB name: RG_ring_0_58459
00:05:13.791   size: 1.000366 MiB name: RG_ring_1_58459
00:05:13.791   size: 1.000366 MiB name: RG_ring_4_58459
00:05:13.791   size: 1.000366 MiB name: RG_ring_5_58459
00:05:13.791   size: 0.125366 MiB name: RG_ring_2_58459
00:05:13.791   size: 0.015991 MiB name: RG_ring_3_58459
00:05:13.791 end memzones-------
00:05:13.791 21:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:05:13.791 heap id: 0 total size: 866.000000 MiB number of busy elements: 315 number of free elements: 19
00:05:13.791 list of free elements. size: 19.913574 MiB
00:05:13.791   [19 free elements in the range 0x200000200000-0x20002b200000, sizes from 0.352844 to 1.999451 MiB; per-element listing condensed]
00:05:13.791 list of standard malloc elements. size: 199.287720 MiB
00:05:13.792   [several hundred "element at address ... with size ..." entries, the vast majority 0.000244 MiB; per-element listing condensed]
00:05:13.793 list of memzone associated elements. size: 646.798706 MiB
00:05:13.793   [per-memzone associations for the pools and rings summarized above (MP_PDU_immediate_data_Pool_0, MP_PDU_data_out_Pool_0, MP_bdev_io_58459_0, MP_evtpool_58459_0, MP_msgpool_58459_0, MP_fsdev_io_58459_0, MP_PDU_Pool_0, MP_SCSI_TASK_Pool_0, MP_Session_Pool_0, RG_ring_0..5_58459); full listing condensed]
00:05:13.793 21:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:13.793 21:36:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58459
00:05:13.793 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58459 ']'
00:05:13.793 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58459
00:05:13.793 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:05:13.793 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:13.793 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58459
00:05:14.052 killing process with pid 58459
00:05:14.052 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:14.052 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:14.052 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58459'
00:05:14.052 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58459
00:05:14.052 21:36:32 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58459
00:05:15.427
00:05:15.427 real 0m2.652s
00:05:15.427 user 0m2.644s
00:05:15.427 sys 0m0.409s
00:05:15.427 21:36:34 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:15.427 ************************************
00:05:15.427 21:36:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:15.427 END TEST dpdk_mem_utility
00:05:15.427 ************************************
00:05:15.427 21:36:34 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:15.427 21:36:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:15.427 21:36:34 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:15.427 21:36:34 -- common/autotest_common.sh@10 -- # set +x
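
The memory-introspection flow exercised in the dpdk_mem_utility test above is two steps: an RPC asks the running target to dump its DPDK heap/mempool/memzone state to a file, then dpdk_mem_info.py post-processes that dump. A hedged sketch of the same two steps against a live spdk_tgt, with the default socket assumed; that the script picks up the dump file written by the immediately preceding RPC is inferred from the trace:

  # step 1: spdk_tgt writes its memory state; the reply names the dump file
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # -> { "filename": "/tmp/spdk_mem_dump.txt" }

  # step 2: summarize the dump, then expand heap 0 into its element lists
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
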
00:05:15.427 ************************************
00:05:15.427 START TEST event
00:05:15.427 ************************************
00:05:15.427 21:36:34 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:15.427 * Looking for test storage...
00:05:15.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:15.687 [lcov version probe (scripts/common.sh cmp_versions 1.15 '<' 2) and the LCOV_OPTS/LCOV coverage exports run here for the event target; the same blocks repeat verbatim before every test section and are condensed]
00:05:15.687 21:36:34 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:15.687 21:36:34 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:15.687 21:36:34 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:15.687 21:36:34 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:05:15.687 21:36:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:15.687 21:36:34 event -- common/autotest_common.sh@10 -- # set +x
00:05:15.687 ************************************
00:05:15.687 START TEST event_perf
00:05:15.687 ************************************
00:05:15.687 21:36:34 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:15.687 Running I/O for 1 seconds...[2024-09-29 21:36:34.471202] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:15.687 [2024-09-29 21:36:34.471394] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58551 ]
00:05:15.687 [2024-09-29 21:36:34.621347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:15.945 [2024-09-29 21:36:34.802097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:15.946 [2024-09-29 21:36:34.802750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:05:15.946 [2024-09-29 21:36:34.802819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.946 Running I/O for 1 seconds...[2024-09-29 21:36:34.802835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:05:17.319
00:05:17.319 lcore 0: 204916
00:05:17.319 lcore 1: 204912
00:05:17.319 lcore 2: 204911
00:05:17.319 lcore 3: 204913
00:05:17.319 done.
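
event_perf takes only a core mask and a duration, so the run above can be reproduced directly from the build tree; 0xF gives one reactor per core 0-3 and -t is the measurement window in seconds:

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
  # output shape: one "lcore N: <event count>" line per reactor, then "done."
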
00:05:17.319
00:05:17.319 real 0m1.627s
00:05:17.319 user 0m4.410s
00:05:17.319 sys 0m0.097s
00:05:17.319 21:36:36 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:17.319 21:36:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:17.319 ************************************
00:05:17.319 END TEST event_perf
00:05:17.319 ************************************
00:05:17.319 21:36:36 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:17.319 21:36:36 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:17.319 21:36:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:17.319 21:36:36 event -- common/autotest_common.sh@10 -- # set +x
00:05:17.319 ************************************
00:05:17.319 START TEST event_reactor
00:05:17.319 ************************************
00:05:17.319 21:36:36 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:17.319 [2024-09-29 21:36:36.133519] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:17.319 [2024-09-29 21:36:36.133712] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58596 ]
00:05:17.319 [2024-09-29 21:36:36.283506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:17.578 [2024-09-29 21:36:36.467808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.952 test_start
00:05:18.952 oneshot
00:05:18.952 tick 100
00:05:18.952 tick 100
00:05:18.952 tick 250
00:05:18.952 tick 100
00:05:18.952 tick 100
00:05:18.952 tick 100
00:05:18.952 tick 250
00:05:18.952 tick 500
00:05:18.952 tick 100
00:05:18.952 tick 100
00:05:18.952 tick 250
00:05:18.952 tick 100
00:05:18.952 tick 100
00:05:18.952 test_end
00:05:18.952
00:05:18.952 real 0m1.605s
00:05:18.952 user 0m1.423s
00:05:18.952 sys 0m0.072s
00:05:18.952 21:36:37 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:18.952 21:36:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:18.952 ************************************
00:05:18.952 END TEST event_reactor
00:05:18.952 ************************************
00:05:18.952 21:36:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:18.952 21:36:37 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:05:18.952 21:36:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:18.952 21:36:37 event -- common/autotest_common.sh@10 -- # set +x
00:05:18.952 ************************************
00:05:18.952 START TEST event_reactor_perf
00:05:18.952 ************************************
00:05:18.952 21:36:37 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:18.952 [2024-09-29 21:36:37.784677] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:18.952 [2024-09-29 21:36:37.784787] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58627 ]
00:05:18.952 [2024-09-29 21:36:37.931175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.211 [2024-09-29 21:36:38.080952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.632 test_start
00:05:20.632 test_end
00:05:20.632 Performance: 406381 events per second
00:05:20.632
00:05:20.632 real 0m1.527s
00:05:20.632 user 0m1.352s
00:05:20.632 sys 0m0.067s
00:05:20.632 ************************************
00:05:20.632 END TEST event_reactor_perf
00:05:20.632 ************************************
00:05:20.632 21:36:39 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:20.632 21:36:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:20.632 21:36:39 event -- event/event.sh@49 -- # uname -s
00:05:20.632 21:36:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:20.632 21:36:39 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:20.632 21:36:39 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:20.632 21:36:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:20.632 21:36:39 event -- common/autotest_common.sh@10 -- # set +x
00:05:20.632 ************************************
00:05:20.632 START TEST event_scheduler
00:05:20.632 ************************************
00:05:20.632 21:36:39 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:20.632 * Looking for test storage...
00:05:20.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:05:20.632 [lcov version probe (scripts/common.sh cmp_versions) and the LCOV_OPTS/LCOV coverage exports run here for the event_scheduler target, identical to the blocks above; condensed]
00:05:20.632 21:36:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:20.632 21:36:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58703
00:05:20.632 21:36:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:20.632 21:36:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58703
00:05:20.632 21:36:39 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58703 ']'
00:05:20.632 21:36:39 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.632 21:36:39 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:20.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.632 21:36:39 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:20.632 21:36:39 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:20.632 21:36:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:20.632 21:36:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:20.632 [2024-09-29 21:36:39.522936] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:20.632 [2024-09-29 21:36:39.523230] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58703 ]
00:05:20.891 [2024-09-29 21:36:39.670706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:20.891 [2024-09-29 21:36:39.856618] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.891 [2024-09-29 21:36:39.856856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:20.891 [2024-09-29 21:36:39.857069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:05:20.891 [2024-09-29 21:36:39.857232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:05:21.457 21:36:40 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:21.457 21:36:40 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:05:21.457 21:36:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:21.457 21:36:40 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:21.457 21:36:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:21.457 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:21.457 POWER: Cannot set governor of lcore 0 to userspace
00:05:21.457 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:21.457 POWER: Cannot set governor of lcore 0 to performance
00:05:21.457 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:21.457 POWER: Cannot set governor of lcore 0 to userspace
00:05:21.457 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:21.457 POWER: Cannot set governor of lcore 0 to userspace
00:05:21.457 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:05:21.457 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:05:21.457 POWER: Unable to set Power Management Environment for lcore 0
00:05:21.457 [2024-09-29 21:36:40.367359] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:05:21.457 [2024-09-29 21:36:40.367448] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:05:21.457 [2024-09-29 21:36:40.367494] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:21.457 [2024-09-29 21:36:40.367544] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:21.457 [2024-09-29 21:36:40.367582] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:21.457 [2024-09-29 21:36:40.367622] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:21.457 21:36:40 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:21.457 21:36:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:21.457 21:36:40 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:21.457 21:36:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:21.716 [2024-09-29 21:36:40.586478] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
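
The POWER and GUEST_CHANNEL errors above are expected inside this VM: no cpufreq governor or virtio power channel is exposed, so the dynamic scheduler comes up without a dpdk governor and just reports its defaults (load limit 20, core limit 80, core busy 95). The ordering that makes this configurable is the --wait-for-rpc launch flag, which pauses the app so the scheduler can be selected before framework initialization; a sketch with plain rpc.py calls, both RPCs as traced above and the default socket assumed:

  # while the app is paused in --wait-for-rpc, pick the scheduler...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
  # ...then let framework initialization proceed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
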
00:05:21.716 21:36:40 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:21.716 21:36:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:21.716 21:36:40 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:21.716 21:36:40 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:21.716 21:36:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:21.716 ************************************
00:05:21.716 START TEST scheduler_create_thread
00:05:21.716 ************************************
00:05:21.716 21:36:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:05:21.716 21:36:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:21.716 [scheduler.sh@12-@19 create four active_pinned threads (-m 0x1/0x2/0x4/0x8, -a 100, returned thread ids 2-5) and four idle_pinned threads (same masks, -a 0, ids 6-9); scheduler.sh@21 creates one_third_active -a 30 (id 10); the identical xtrace_disable / set +x / [[ 0 == 0 ]] wrapper lines around each rpc_cmd are condensed]
00:05:21.716 21:36:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:21.716 21:36:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:21.716 21:36:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:21.974 21:36:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:21.974 21:36:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:21.974 21:36:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:22.912 ************************************
00:05:22.912 END TEST scheduler_create_thread
00:05:22.912 ************************************
00:05:22.912 21:36:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:22.912
00:05:22.912 real 0m1.172s
00:05:22.912 user 0m0.013s
00:05:22.912 sys 0m0.004s
00:05:22.912 21:36:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:22.912 21:36:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:22.912 21:36:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:22.912 21:36:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58703
00:05:22.912 21:36:41 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58703 ']'
00:05:22.912 21:36:41 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58703
00:05:22.912 21:36:41 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:05:22.912 21:36:41 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:22.912 21:36:41 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58703
00:05:22.912 killing process with pid 58703
00:05:22.912 21:36:41 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:05:22.912 21:36:41 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:05:22.912 21:36:41 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58703'
00:05:22.912 21:36:41 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58703
00:05:22.912 21:36:41 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58703
[2024-09-29 21:36:42.252154] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:24.425 00:05:24.425 real 0m3.766s 00:05:24.425 user 0m5.891s 00:05:24.425 sys 0m0.333s 00:05:24.425 21:36:43 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.425 ************************************ 00:05:24.425 END TEST event_scheduler 00:05:24.425 ************************************ 00:05:24.425 21:36:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.425 21:36:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:24.425 21:36:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:24.425 21:36:43 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.425 21:36:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.425 21:36:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.425 ************************************ 00:05:24.425 START TEST app_repeat 00:05:24.425 ************************************ 00:05:24.425 21:36:43 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58792 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.425 Process app_repeat pid: 58792 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58792' 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.425 spdk_app_start Round 0 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:24.425 21:36:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58792 /var/tmp/spdk-nbd.sock 00:05:24.425 21:36:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58792 ']' 00:05:24.425 21:36:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.425 21:36:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.425 21:36:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.425 21:36:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.425 21:36:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.425 [2024-09-29 21:36:43.169163] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:24.425 [2024-09-29 21:36:43.169274] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58792 ] 00:05:24.425 [2024-09-29 21:36:43.319956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.684 [2024-09-29 21:36:43.499132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.684 [2024-09-29 21:36:43.499166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.252 21:36:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.252 21:36:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:25.252 21:36:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.510 Malloc0 00:05:25.510 21:36:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.768 Malloc1 00:05:25.768 21:36:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.768 /dev/nbd0 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.768 21:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.768 21:36:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:25.768 21:36:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.768 21:36:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.768 21:36:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.768 21:36:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:25.768 21:36:44 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:25.768 21:36:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.768 21:36:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.768 21:36:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.027 1+0 records in 00:05:26.027 1+0 records out 00:05:26.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242948 s, 16.9 MB/s 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:26.027 21:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.027 21:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.027 21:36:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.027 /dev/nbd1 00:05:26.027 21:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.027 21:36:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.027 1+0 records in 00:05:26.027 1+0 records out 00:05:26.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306694 s, 13.4 MB/s 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:26.027 21:36:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:26.027 21:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.027 21:36:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.027 21:36:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.027 21:36:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:26.027 21:36:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.286 { 00:05:26.286 "nbd_device": "/dev/nbd0", 00:05:26.286 "bdev_name": "Malloc0" 00:05:26.286 }, 00:05:26.286 { 00:05:26.286 "nbd_device": "/dev/nbd1", 00:05:26.286 "bdev_name": "Malloc1" 00:05:26.286 } 00:05:26.286 ]' 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.286 { 00:05:26.286 "nbd_device": "/dev/nbd0", 00:05:26.286 "bdev_name": "Malloc0" 00:05:26.286 }, 00:05:26.286 { 00:05:26.286 "nbd_device": "/dev/nbd1", 00:05:26.286 "bdev_name": "Malloc1" 00:05:26.286 } 00:05:26.286 ]' 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.286 /dev/nbd1' 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.286 /dev/nbd1' 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.286 256+0 records in 00:05:26.286 256+0 records out 00:05:26.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112937 s, 92.8 MB/s 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.286 21:36:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.545 256+0 records in 00:05:26.545 256+0 records out 00:05:26.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021946 s, 47.8 MB/s 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.545 256+0 records in 00:05:26.545 256+0 records out 00:05:26.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246437 s, 42.5 MB/s 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.545 21:36:45 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.545 21:36:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.803 21:36:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.803 21:36:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.803 21:36:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.803 21:36:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.804 21:36:45 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.804 21:36:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.062 21:36:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.062 21:36:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.628 21:36:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.195 [2024-09-29 21:36:47.109273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.454 [2024-09-29 21:36:47.281417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.454 [2024-09-29 21:36:47.281432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.454 [2024-09-29 21:36:47.409734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.454 [2024-09-29 21:36:47.409803] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.359 21:36:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.359 spdk_app_start Round 1 00:05:30.359 21:36:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:30.359 21:36:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58792 /var/tmp/spdk-nbd.sock 00:05:30.360 21:36:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58792 ']' 00:05:30.360 21:36:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.360 21:36:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.360 21:36:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:30.360 21:36:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.360 21:36:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.618 21:36:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.618 21:36:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:30.618 21:36:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.877 Malloc0 00:05:30.877 21:36:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.136 Malloc1 00:05:31.136 21:36:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.136 21:36:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.395 /dev/nbd0 00:05:31.395 21:36:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.395 21:36:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.395 1+0 records in 00:05:31.395 1+0 records out 
00:05:31.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021101 s, 19.4 MB/s 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:31.395 21:36:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:31.395 21:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.395 21:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.395 21:36:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.654 /dev/nbd1 00:05:31.654 21:36:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.654 21:36:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.654 1+0 records in 00:05:31.654 1+0 records out 00:05:31.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241056 s, 17.0 MB/s 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:31.654 21:36:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:31.654 21:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.654 21:36:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.654 21:36:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.654 21:36:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.654 21:36:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.654 21:36:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.654 { 00:05:31.654 "nbd_device": "/dev/nbd0", 00:05:31.654 "bdev_name": "Malloc0" 00:05:31.654 }, 00:05:31.654 { 00:05:31.654 "nbd_device": "/dev/nbd1", 00:05:31.654 "bdev_name": "Malloc1" 00:05:31.654 } 
00:05:31.654 ]' 00:05:31.654 21:36:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.654 { 00:05:31.654 "nbd_device": "/dev/nbd0", 00:05:31.654 "bdev_name": "Malloc0" 00:05:31.654 }, 00:05:31.654 { 00:05:31.654 "nbd_device": "/dev/nbd1", 00:05:31.654 "bdev_name": "Malloc1" 00:05:31.654 } 00:05:31.654 ]' 00:05:31.654 21:36:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.912 21:36:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.912 /dev/nbd1' 00:05:31.912 21:36:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.913 /dev/nbd1' 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.913 256+0 records in 00:05:31.913 256+0 records out 00:05:31.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472758 s, 222 MB/s 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.913 256+0 records in 00:05:31.913 256+0 records out 00:05:31.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187609 s, 55.9 MB/s 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.913 256+0 records in 00:05:31.913 256+0 records out 00:05:31.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204199 s, 51.4 MB/s 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.913 21:36:50 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.913 21:36:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.171 21:36:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.171 21:36:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.171 21:36:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.171 21:36:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.171 21:36:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.171 21:36:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.171 21:36:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.171 21:36:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.171 21:36:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.171 21:36:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.431 21:36:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.431 21:36:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.004 21:36:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.572 [2024-09-29 21:36:52.545132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.831 [2024-09-29 21:36:52.713918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.831 [2024-09-29 21:36:52.714027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.089 [2024-09-29 21:36:52.821948] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.089 [2024-09-29 21:36:52.822024] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.993 21:36:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.993 21:36:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.993 spdk_app_start Round 2 00:05:35.993 21:36:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58792 /var/tmp/spdk-nbd.sock 00:05:35.993 21:36:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58792 ']' 00:05:35.993 21:36:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.993 21:36:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.993 21:36:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:35.993 21:36:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.993 21:36:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.993 21:36:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.993 21:36:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:35.993 21:36:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.252 Malloc0 00:05:36.252 21:36:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.511 Malloc1 00:05:36.511 21:36:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.511 21:36:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.769 /dev/nbd0 00:05:36.769 21:36:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.770 21:36:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.770 1+0 records in 00:05:36.770 1+0 records out 
00:05:36.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200741 s, 20.4 MB/s 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:36.770 21:36:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:36.770 21:36:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.770 21:36:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.770 21:36:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.029 /dev/nbd1 00:05:37.029 21:36:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.029 21:36:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.029 1+0 records in 00:05:37.029 1+0 records out 00:05:37.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221761 s, 18.5 MB/s 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.029 21:36:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.029 21:36:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.029 21:36:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.029 21:36:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.029 21:36:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.029 21:36:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.289 { 00:05:37.289 "nbd_device": "/dev/nbd0", 00:05:37.289 "bdev_name": "Malloc0" 00:05:37.289 }, 00:05:37.289 { 00:05:37.289 "nbd_device": "/dev/nbd1", 00:05:37.289 "bdev_name": "Malloc1" 00:05:37.289 } 
00:05:37.289 ]' 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.289 { 00:05:37.289 "nbd_device": "/dev/nbd0", 00:05:37.289 "bdev_name": "Malloc0" 00:05:37.289 }, 00:05:37.289 { 00:05:37.289 "nbd_device": "/dev/nbd1", 00:05:37.289 "bdev_name": "Malloc1" 00:05:37.289 } 00:05:37.289 ]' 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.289 /dev/nbd1' 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.289 /dev/nbd1' 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.289 256+0 records in 00:05:37.289 256+0 records out 00:05:37.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00309754 s, 339 MB/s 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.289 256+0 records in 00:05:37.289 256+0 records out 00:05:37.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166428 s, 63.0 MB/s 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.289 256+0 records in 00:05:37.289 256+0 records out 00:05:37.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0186253 s, 56.3 MB/s 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.289 21:36:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.289 21:36:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.575 21:36:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.575 21:36:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.575 21:36:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.575 21:36:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.575 21:36:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.575 21:36:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.575 21:36:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.575 21:36:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.575 21:36:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.575 21:36:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.834 21:36:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.092 21:36:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.092 21:36:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.093 21:36:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.093 21:36:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.093 21:36:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.093 21:36:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.093 21:36:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.351 21:36:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.920 [2024-09-29 21:36:57.796013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.179 [2024-09-29 21:36:57.977845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.179 [2024-09-29 21:36:57.977966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.179 [2024-09-29 21:36:58.089372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.179 [2024-09-29 21:36:58.089463] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.709 21:37:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58792 /var/tmp/spdk-nbd.sock 00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58792 ']' 00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:41.709 21:37:00 event.app_repeat -- event/event.sh@39 -- # killprocess 58792
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58792 ']'
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58792
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58792
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' killing process with pid 58792
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58792'
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58792
00:05:41.709 21:37:00 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58792
00:05:42.279 spdk_app_start is called in Round 0.
00:05:42.279 Shutdown signal received, stop current app iteration
00:05:42.279 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization...
00:05:42.279 spdk_app_start is called in Round 1.
00:05:42.279 Shutdown signal received, stop current app iteration
00:05:42.279 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization...
00:05:42.279 spdk_app_start is called in Round 2.
00:05:42.279 Shutdown signal received, stop current app iteration
00:05:42.279 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization...
00:05:42.279 spdk_app_start is called in Round 3.
00:05:42.279 Shutdown signal received, stop current app iteration
00:05:42.279 21:37:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:42.279 21:37:00 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:42.279
00:05:42.279 real 0m17.863s
00:05:42.279 user 0m38.215s
00:05:42.279 sys 0m2.096s
00:05:42.279 21:37:00 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:42.279 21:37:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:42.279 ************************************
00:05:42.279 END TEST app_repeat
00:05:42.279 ************************************
00:05:42.279 21:37:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:42.279 21:37:01 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:42.279 21:37:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:42.279 21:37:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:42.279 21:37:01 event -- common/autotest_common.sh@10 -- # set +x
00:05:42.279 ************************************
00:05:42.279 START TEST cpu_locks
00:05:42.279 ************************************
00:05:42.279 21:37:01 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:42.279 * Looking for test storage...
00:05:42.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:42.279 21:37:01 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:42.279 21:37:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version
00:05:42.279 21:37:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:42.280 21:37:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:42.280 21:37:01 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:42.280 21:37:01 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:42.280 21:37:01 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:42.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:42.280 --rc genhtml_branch_coverage=1
00:05:42.280 --rc genhtml_function_coverage=1
00:05:42.280 --rc genhtml_legend=1
00:05:42.280 --rc geninfo_all_blocks=1
00:05:42.280 --rc geninfo_unexecuted_blocks=1
00:05:42.280
00:05:42.280 '
00:05:42.280 21:37:01 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:42.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:42.280 --rc genhtml_branch_coverage=1
00:05:42.280 --rc genhtml_function_coverage=1
00:05:42.280 --rc genhtml_legend=1
00:05:42.280 --rc geninfo_all_blocks=1
00:05:42.280 --rc geninfo_unexecuted_blocks=1
00:05:42.280
00:05:42.280 '
00:05:42.280 21:37:01 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:42.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:42.280 --rc genhtml_branch_coverage=1
00:05:42.280 --rc genhtml_function_coverage=1
00:05:42.280 --rc genhtml_legend=1
00:05:42.280 --rc geninfo_all_blocks=1
00:05:42.280 --rc geninfo_unexecuted_blocks=1
00:05:42.280
00:05:42.280 '
00:05:42.280 21:37:01 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:42.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:42.280 --rc genhtml_branch_coverage=1
00:05:42.280 --rc genhtml_function_coverage=1
00:05:42.280 --rc genhtml_legend=1
00:05:42.280 --rc geninfo_all_blocks=1
00:05:42.280 --rc geninfo_unexecuted_blocks=1
00:05:42.280
00:05:42.280 '
00:05:42.280 21:37:01 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:42.280 21:37:01 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:42.280 21:37:01 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:42.280 21:37:01 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:42.280 21:37:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:42.280 21:37:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:42.280 21:37:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:42.280 ************************************
00:05:42.280 START TEST default_locks
00:05:42.280 ************************************
00:05:42.280 21:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:05:42.280 21:37:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59223
00:05:42.280 21:37:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59223
00:05:42.280 21:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59223 ']'
00:05:42.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:42.280 21:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:42.280 21:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:42.280 21:37:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:42.280 21:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:42.280 21:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:42.280 21:37:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:42.540 [2024-09-29 21:37:01.273652] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:42.540 [2024-09-29 21:37:01.273779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59223 ]
00:05:42.540 [2024-09-29 21:37:01.422283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.797 [2024-09-29 21:37:01.595589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.363 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:43.363 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:05:43.363 21:37:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59223
00:05:43.363 21:37:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:43.363 21:37:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59223
00:05:43.363 21:37:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59223
00:05:43.363 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59223 ']'
00:05:43.363 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59223
00:05:43.363 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:05:43.620 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:43.620 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59223
00:05:43.620 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:43.620 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:43.620 killing process with pid 59223
00:05:43.620 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59223'
00:05:43.620 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59223
00:05:43.620 21:37:02 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59223
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59223
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59223
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59223
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59223 ']'
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:44.991 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59223) - No such process
00:05:44.991 ERROR: process (pid: 59223) is no longer running
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:44.991
00:05:44.991 real 0m2.521s
00:05:44.991 user 0m2.409s
00:05:44.991 sys 0m0.482s
00:05:44.991 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:44.992 ************************************
00:05:44.992 END TEST default_locks
00:05:44.992 ************************************
00:05:44.992 21:37:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:44.992 21:37:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:44.992 21:37:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:44.992 21:37:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:44.992 21:37:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:44.992 ************************************
00:05:44.992 START TEST default_locks_via_rpc
00:05:44.992 ************************************
00:05:44.992 21:37:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:05:44.992 21:37:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59287
00:05:44.992 21:37:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59287
00:05:44.992 21:37:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59287 ']'
00:05:44.992 21:37:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
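The default_locks test that just finished reduces to the locks_exist check traced at event/cpu_locks.sh@22: a target started with -m 0x1 must hold a file lock that lslocks can attribute to its pid, and once the process is killed the lock (and the test's NOT waitforlisten retry) must fail. Condensed from the xtrace above into a standalone sketch:

# True if the given pid holds an SPDK per-core lock file; this is
# exactly the lslocks | grep pipeline shown in the trace.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

locks_exist 59223    # pid of the spdk_tgt launched for this test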
00:05:44.992 21:37:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:44.992 21:37:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.992 21:37:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:44.992 21:37:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:44.992 21:37:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:45.249 [2024-09-29 21:37:03.849336] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:45.249 [2024-09-29 21:37:03.849473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ]
00:05:45.249 [2024-09-29 21:37:03.996620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:45.814 [2024-09-29 21:37:04.185699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59287
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:45.814 21:37:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59287
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59287
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59287 ']'
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59287
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59287
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:46.072 killing process with pid 59287
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59287'
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59287
00:05:46.072 21:37:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59287
00:05:47.458
00:05:47.458 real 0m2.509s
00:05:47.458 user 0m2.480s
00:05:47.458 sys 0m0.470s
00:05:47.458 21:37:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:47.458 21:37:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:47.458 ************************************
00:05:47.458 END TEST default_locks_via_rpc
00:05:47.458 ************************************
00:05:47.458 21:37:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:47.458 21:37:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:47.458 21:37:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:47.458 21:37:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:47.458 ************************************
00:05:47.458 START TEST non_locking_app_on_locked_coremask
00:05:47.458 ************************************
00:05:47.458 21:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:05:47.458 21:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59339
00:05:47.458 21:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59339 /var/tmp/spdk.sock
00:05:47.458 21:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59339 ']'
00:05:47.458 21:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:47.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:47.458 21:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:47.458 21:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:47.458 21:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:47.458 21:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:47.458 21:37:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:47.459 [2024-09-29 21:37:06.408045] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:47.459 [2024-09-29 21:37:06.408181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59339 ]
00:05:47.718 [2024-09-29 21:37:06.557267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.976 [2024-09-29 21:37:06.846243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59355
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59355 /var/tmp/spdk2.sock
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59355 ']'
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:48.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:48.539 21:37:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:48.796 [2024-09-29 21:37:07.560458] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:48.796 [2024-09-29 21:37:07.560583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59355 ]
00:05:48.796 [2024-09-29 21:37:07.716993] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. [2024-09-29 21:37:07.717050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:49.361 [2024-09-29 21:37:08.164132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.293 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:50.293 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:50.293 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59339
00:05:50.293 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:50.293 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59339
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59339
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59339 ']'
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59339
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59339
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:50.860 killing process with pid 59339
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59339'
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59339
00:05:50.860 21:37:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59339
00:05:53.390 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59355
00:05:53.390 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59355 ']'
00:05:53.390 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59355
00:05:53.390 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:53.390 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:53.390 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59355
00:05:53.390 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:53.390 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:53.391 killing process with pid 59355
00:05:53.391 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59355'
00:05:53.391 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59355
00:05:53.391 21:37:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59355
00:05:54.765
00:05:54.765 real 0m7.316s
00:05:54.765 user 0m7.418s
00:05:54.765 sys 0m0.988s
00:05:54.765 21:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:54.765 21:37:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:54.765 ************************************
00:05:54.765 END TEST non_locking_app_on_locked_coremask
00:05:54.765 ************************************
00:05:54.766 21:37:13 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:54.766 21:37:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:54.766 21:37:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:54.766 21:37:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:54.766 ************************************
00:05:54.766 START TEST locking_app_on_unlocked_coremask
00:05:54.766 ************************************
00:05:54.766 21:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:05:54.766 21:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59457
00:05:54.766 21:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59457 /var/tmp/spdk.sock
00:05:54.766 21:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59457 ']'
00:05:54.766 21:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:54.766 21:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:54.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:54.766 21:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:54.766 21:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:54.766 21:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:54.766 21:37:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:55.024 [2024-09-29 21:37:13.761586] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:55.024 [2024-09-29 21:37:13.761708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59457 ]
00:05:55.024 [2024-09-29 21:37:13.909075] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
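The non_locking_app_on_locked_coremask run that just ended exercises the intended escape hatch: a second target may share an already-locked core mask only by opting out of lock acquisition, which is why the log prints "CPU core locks deactivated" for it. A sketch of the two launches, with the binary and socket paths used throughout this run:

# First instance locks core 0 (backed by /var/tmp/spdk_cpu_lock_000).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
# Second instance shares core 0 by skipping the lock, on its own RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &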
00:05:55.024 [2024-09-29 21:37:13.909118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:55.289 [2024-09-29 21:37:14.087986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59473
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59473 /var/tmp/spdk2.sock
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59473 ']'
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:55.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:55.866 21:37:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:55.866 [2024-09-29 21:37:14.699401] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:55.866 [2024-09-29 21:37:14.699522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59473 ]
00:05:55.866 [2024-09-29 21:37:14.847238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.432 [2024-09-29 21:37:15.213168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.366 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:57.366 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:57.366 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59473
00:05:57.366 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59473
00:05:57.366 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59457
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59457 ']'
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59457
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59457
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:57.931 killing process with pid 59457
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59457'
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59457
00:05:57.931 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59457
00:06:00.461 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59473
00:06:00.461 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59473 ']'
00:06:00.461 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59473
00:06:00.461 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:00.461 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:00.461 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59473
00:06:00.720 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:00.720 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:00.720 killing process with pid 59473
00:06:00.720 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59473'
00:06:00.720 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59473
00:06:00.720 21:37:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59473
00:06:02.102
00:06:02.102 real 0m7.150s
00:06:02.102 user 0m7.315s
00:06:02.102 sys 0m0.972s
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:02.102 ************************************
00:06:02.102 END TEST locking_app_on_unlocked_coremask
00:06:02.102 ************************************
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.102 21:37:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:02.102 21:37:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:02.102 21:37:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:02.102 21:37:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:02.102 ************************************
00:06:02.102 START TEST locking_app_on_locked_coremask
00:06:02.102 ************************************
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59575
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59575 /var/tmp/spdk.sock
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59575 ']'
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:02.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.102 21:37:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:02.102 [2024-09-29 21:37:20.966855] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:06:02.102 [2024-09-29 21:37:20.966966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59575 ]
00:06:02.361 [2024-09-29 21:37:21.110139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.361 [2024-09-29 21:37:21.295974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59591
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59591 /var/tmp/spdk2.sock
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59591 /var/tmp/spdk2.sock
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59591 /var/tmp/spdk2.sock
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59591 ']'
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:02.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:02.927 21:37:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.927 [2024-09-29 21:37:21.901326] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:06:02.927 [2024-09-29 21:37:21.901470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59591 ]
00:06:03.185 [2024-09-29 21:37:22.047976] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59575 has claimed it.
00:06:03.186 [2024-09-29 21:37:22.048036] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:03.751 ERROR: process (pid: 59591) is no longer running
00:06:03.751 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59591) - No such process
00:06:03.751 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:03.751 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:03.751 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:03.751 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:03.751 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:03.751 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:03.751 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59575
00:06:03.751 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59575
00:06:03.751 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59575
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59575 ']'
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59575
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59575
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:04.010 killing process with pid 59575
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59575'
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59575
00:06:04.010 21:37:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59575
00:06:05.385
00:06:05.385 real 0m3.301s
00:06:05.385 user 0m3.480s
00:06:05.385 sys 0m0.599s
00:06:05.385 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:05.385 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:05.385 ************************************
00:06:05.385 END TEST locking_app_on_locked_coremask
00:06:05.385 ************************************
00:06:05.385 21:37:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:05.385 21:37:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:05.385 21:37:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:05.385 21:37:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:05.385 ************************************
00:06:05.385 START TEST locking_overlapped_coremask
00:06:05.385 ************************************
00:06:05.386 21:37:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:06:05.386 21:37:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59650
00:06:05.386 21:37:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59650 /var/tmp/spdk.sock
00:06:05.386 21:37:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59650 ']'
00:06:05.386 21:37:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:05.386 21:37:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:05.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:05.386 21:37:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:05.386 21:37:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:05.386 21:37:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:05.386 21:37:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:05.386 [2024-09-29 21:37:24.331088] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:06:05.386 [2024-09-29 21:37:24.331196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59650 ]
00:06:05.642 [2024-09-29 21:37:24.477572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:05.899 [2024-09-29 21:37:24.668663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:05.899 [2024-09-29 21:37:24.669352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.899 [2024-09-29 21:37:24.669368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59668
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59668 /var/tmp/spdk2.sock
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59668 /var/tmp/spdk2.sock
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59668 /var/tmp/spdk2.sock
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59668 ']'
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:06.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:06.463 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.463 [2024-09-29 21:37:25.320593] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:06:06.463 [2024-09-29 21:37:25.320719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59668 ]
00:06:06.726 [2024-09-29 21:37:25.476364] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59650 has claimed it.
00:06:06.726 [2024-09-29 21:37:25.480416] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:07.001 ERROR: process (pid: 59668) is no longer running
00:06:07.001 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59668) - No such process
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59650
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59650 ']'
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59650
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59650
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:07.001 killing process with pid 59650
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59650'
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59650
00:06:07.001 21:37:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59650
00:06:08.375
00:06:08.375 real 0m3.074s
00:06:08.375 user 0m8.042s
00:06:08.375 sys 0m0.496s
00:06:08.375 21:37:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:08.375 ************************************
00:06:08.375 END TEST locking_overlapped_coremask
00:06:08.375 ************************************
00:06:08.375 21:37:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:08.634 21:37:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:08.634 21:37:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:08.634 21:37:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:08.634 21:37:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:08.634 ************************************
00:06:08.634 START TEST locking_overlapped_coremask_via_rpc
00:06:08.634 ************************************
00:06:08.634 21:37:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:06:08.634 21:37:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59721
00:06:08.634 21:37:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59721 /var/tmp/spdk.sock
00:06:08.634 21:37:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59721 ']'
00:06:08.634 21:37:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.634 21:37:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:08.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.634 21:37:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:08.634 21:37:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:08.634 21:37:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:08.634 21:37:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:08.634 [2024-09-29 21:37:27.454145] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:06:08.634 [2024-09-29 21:37:27.454271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59721 ]
00:06:08.634 [2024-09-29 21:37:27.601930] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. [2024-09-29 21:37:27.601982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:08.892 [2024-09-29 21:37:27.792718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:08.892 [2024-09-29 21:37:27.793421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.892 [2024-09-29 21:37:27.793467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:09.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59739
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59739 /var/tmp/spdk2.sock
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59739 ']'
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:09.458 21:37:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:09.458 [2024-09-29 21:37:28.411940] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:06:09.458 [2024-09-29 21:37:28.412069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ]
00:06:09.716 [2024-09-29 21:37:28.566472] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:09.716 [2024-09-29 21:37:28.570403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.973 [2024-09-29 21:37:28.949000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.973 [2024-09-29 21:37:28.952535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.973 [2024-09-29 21:37:28.952546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.346 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.346 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:11.346 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.347 [2024-09-29 21:37:30.135601] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59721 has claimed it. 
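The claim failure just above follows directly from the two coremasks in this test: the first spdk_tgt runs with -m 0x7 and holds /var/tmp/spdk_cpu_lock_000..002, the second runs with -m 0x1c and --disable-cpumask-locks, so its reactors start anyway and the conflict only surfaces when locking is requested. A minimal decoding sketch (plain Python, not part of the test suite) showing why core 2 is the collision point:

def cores(mask):
    # each set bit in the -m coremask selects one CPU core
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

print(cores(0x7))   # [0, 1, 2] -> first target (pid 59721), lock files _000.._002
print(cores(0x1c))  # [2, 3, 4] -> second target (pid 59739)
# Core 2 is in both lists, so the second target cannot take the core-2 lock
# and claim_cpu_cores reports the error logged above.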
00:06:11.347 request: 00:06:11.347 { 00:06:11.347 "method": "framework_enable_cpumask_locks", 00:06:11.347 "req_id": 1 00:06:11.347 } 00:06:11.347 Got JSON-RPC error response 00:06:11.347 response: 00:06:11.347 { 00:06:11.347 "code": -32603, 00:06:11.347 "message": "Failed to claim CPU core: 2" 00:06:11.347 } 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59721 /var/tmp/spdk.sock 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59721 ']' 00:06:11.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.347 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59739 /var/tmp/spdk2.sock 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59739 ']' 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
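The request/response pair above is an ordinary JSON-RPC exchange over the target's Unix socket; rpc_cmd ultimately drives scripts/rpc.py. A minimal stdlib-only sketch of the same call, assuming a target listening on /var/tmp/spdk.sock as in this test; the read-until-parse framing is an assumption for illustration (not the rpc.py implementation), and the log's "req_id" naming is taken here to correspond to the standard JSON-RPC "id" field:

import json
import socket

def rpc(method, params=None, sock_path="/var/tmp/spdk.sock", req_id=1):
    # build a JSON-RPC 2.0 request matching the method echoed in the log
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("target closed the socket")
            buf += chunk
            try:
                return json.loads(buf)  # a complete response object arrived
            except json.JSONDecodeError:
                continue  # partial read; keep receiving

# Against the contended target above, the response carries the error from the
# log: {"code": -32603, "message": "Failed to claim CPU core: 2"}.
print(rpc("framework_enable_cpumask_locks"))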
00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.604 ************************************ 00:06:11.604 END TEST locking_overlapped_coremask_via_rpc 00:06:11.604 ************************************ 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.604 00:06:11.604 real 0m3.202s 00:06:11.604 user 0m1.069s 00:06:11.604 sys 0m0.152s 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.604 21:37:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.862 21:37:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:11.862 21:37:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59721 ]] 00:06:11.862 21:37:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59721 00:06:11.862 21:37:30 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59721 ']' 00:06:11.862 21:37:30 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59721 00:06:11.862 21:37:30 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:11.862 21:37:30 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.862 21:37:30 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59721 00:06:11.862 killing process with pid 59721 00:06:11.862 21:37:30 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.862 21:37:30 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.862 21:37:30 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59721' 00:06:11.862 21:37:30 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59721 00:06:11.862 21:37:30 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59721 00:06:13.234 21:37:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59739 ]] 00:06:13.234 21:37:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59739 00:06:13.234 21:37:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59739 ']' 00:06:13.234 21:37:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59739 00:06:13.234 21:37:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:13.234 21:37:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.234 
21:37:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59739 00:06:13.234 killing process with pid 59739 00:06:13.234 21:37:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:13.234 21:37:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:13.234 21:37:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59739' 00:06:13.234 21:37:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59739 00:06:13.234 21:37:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59739 00:06:14.608 21:37:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.608 21:37:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.608 21:37:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59721 ]] 00:06:14.608 21:37:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59721 00:06:14.608 Process with pid 59721 is not found 00:06:14.608 21:37:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59721 ']' 00:06:14.608 21:37:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59721 00:06:14.608 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59721) - No such process 00:06:14.608 21:37:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59721 is not found' 00:06:14.608 21:37:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59739 ]] 00:06:14.608 21:37:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59739 00:06:14.608 21:37:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59739 ']' 00:06:14.608 Process with pid 59739 is not found 00:06:14.608 21:37:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59739 00:06:14.608 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59739) - No such process 00:06:14.608 21:37:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59739 is not found' 00:06:14.608 21:37:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.608 00:06:14.608 real 0m32.278s 00:06:14.608 user 0m54.000s 00:06:14.608 sys 0m5.035s 00:06:14.608 21:37:33 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.608 ************************************ 00:06:14.608 21:37:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.608 END TEST cpu_locks 00:06:14.608 ************************************ 00:06:14.608 ************************************ 00:06:14.608 END TEST event 00:06:14.608 ************************************ 00:06:14.608 00:06:14.608 real 0m59.061s 00:06:14.608 user 1m45.452s 00:06:14.608 sys 0m7.918s 00:06:14.608 21:37:33 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.608 21:37:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.608 21:37:33 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.608 21:37:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.608 21:37:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.608 21:37:33 -- common/autotest_common.sh@10 -- # set +x 00:06:14.608 ************************************ 00:06:14.608 START TEST thread 00:06:14.608 ************************************ 00:06:14.608 21:37:33 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:14.608 * Looking for test storage... 
00:06:14.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:14.608 21:37:33 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.608 21:37:33 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.608 21:37:33 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.608 21:37:33 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.608 21:37:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.608 21:37:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.608 21:37:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.608 21:37:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.608 21:37:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.608 21:37:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.608 21:37:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.608 21:37:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.608 21:37:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.608 21:37:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.608 21:37:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.608 21:37:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:14.608 21:37:33 thread -- scripts/common.sh@345 -- # : 1 00:06:14.608 21:37:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.608 21:37:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.608 21:37:33 thread -- scripts/common.sh@365 -- # decimal 1 00:06:14.608 21:37:33 thread -- scripts/common.sh@353 -- # local d=1 00:06:14.608 21:37:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.608 21:37:33 thread -- scripts/common.sh@355 -- # echo 1 00:06:14.608 21:37:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.608 21:37:33 thread -- scripts/common.sh@366 -- # decimal 2 00:06:14.608 21:37:33 thread -- scripts/common.sh@353 -- # local d=2 00:06:14.608 21:37:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.608 21:37:33 thread -- scripts/common.sh@355 -- # echo 2 00:06:14.608 21:37:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.608 21:37:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.608 21:37:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.608 21:37:33 thread -- scripts/common.sh@368 -- # return 0 00:06:14.608 21:37:33 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.608 21:37:33 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.608 --rc genhtml_branch_coverage=1 00:06:14.608 --rc genhtml_function_coverage=1 00:06:14.608 --rc genhtml_legend=1 00:06:14.608 --rc geninfo_all_blocks=1 00:06:14.608 --rc geninfo_unexecuted_blocks=1 00:06:14.608 00:06:14.608 ' 00:06:14.608 21:37:33 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.608 --rc genhtml_branch_coverage=1 00:06:14.608 --rc genhtml_function_coverage=1 00:06:14.608 --rc genhtml_legend=1 00:06:14.608 --rc geninfo_all_blocks=1 00:06:14.608 --rc geninfo_unexecuted_blocks=1 00:06:14.608 00:06:14.608 ' 00:06:14.608 21:37:33 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:14.608 --rc genhtml_branch_coverage=1 00:06:14.608 --rc genhtml_function_coverage=1 00:06:14.608 --rc genhtml_legend=1 00:06:14.608 --rc geninfo_all_blocks=1 00:06:14.609 --rc geninfo_unexecuted_blocks=1 00:06:14.609 00:06:14.609 ' 00:06:14.609 21:37:33 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.609 --rc genhtml_branch_coverage=1 00:06:14.609 --rc genhtml_function_coverage=1 00:06:14.609 --rc genhtml_legend=1 00:06:14.609 --rc geninfo_all_blocks=1 00:06:14.609 --rc geninfo_unexecuted_blocks=1 00:06:14.609 00:06:14.609 ' 00:06:14.609 21:37:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.609 21:37:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:14.609 21:37:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.609 21:37:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.609 ************************************ 00:06:14.609 START TEST thread_poller_perf 00:06:14.609 ************************************ 00:06:14.609 21:37:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:14.609 [2024-09-29 21:37:33.558843] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:14.609 [2024-09-29 21:37:33.559126] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59899 ] 00:06:14.867 [2024-09-29 21:37:33.704223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.126 [2024-09-29 21:37:33.920984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.126 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:16.502 ====================================== 00:06:16.502 busy:2609051360 (cyc) 00:06:16.502 total_run_count: 306000 00:06:16.502 tsc_hz: 2600000000 (cyc) 00:06:16.502 ====================================== 00:06:16.502 poller_cost: 8526 (cyc), 3279 (nsec) 00:06:16.502 00:06:16.502 real 0m1.682s 00:06:16.502 user 0m1.482s 00:06:16.502 ************************************ 00:06:16.502 END TEST thread_poller_perf 00:06:16.502 ************************************ 00:06:16.502 sys 0m0.090s 00:06:16.502 21:37:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.502 21:37:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.502 21:37:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.502 21:37:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:16.502 21:37:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.502 21:37:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.502 ************************************ 00:06:16.502 START TEST thread_poller_perf 00:06:16.502 ************************************ 00:06:16.502 21:37:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.502 [2024-09-29 21:37:35.301058] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:16.502 [2024-09-29 21:37:35.301559] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59937 ] 00:06:16.502 [2024-09-29 21:37:35.451663] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.761 [2024-09-29 21:37:35.674219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.761 Running 1000 pollers for 1 seconds with 0 microseconds period. 
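The summary block above reduces to a single division: poller_cost is busy TSC cycles over total_run_count, converted to nanoseconds via tsc_hz. A quick check with this run's numbers (integer division is an assumption about poller_perf's rounding, but it reproduces the printed values; the zero-period run that follows works out to 658 cyc / 253 nsec the same way):

busy_cyc = 2_609_051_360  # busy: ... (cyc)
runs = 306_000            # total_run_count
tsc_hz = 2_600_000_000    # tsc_hz: ... (cyc)

cost_cyc = busy_cyc // runs                    # -> 8526 (cyc)
cost_ns = cost_cyc * 1_000_000_000 // tsc_hz   # -> 3279 (nsec)
print(cost_cyc, cost_ns)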
00:06:18.138 ====================================== 00:06:18.138 busy:2603351538 (cyc) 00:06:18.138 total_run_count: 3952000 00:06:18.138 tsc_hz: 2600000000 (cyc) 00:06:18.138 ====================================== 00:06:18.138 poller_cost: 658 (cyc), 253 (nsec) 00:06:18.138 00:06:18.138 real 0m1.686s 00:06:18.138 user 0m1.499s 00:06:18.138 sys 0m0.078s 00:06:18.138 21:37:36 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.138 21:37:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.138 ************************************ 00:06:18.138 END TEST thread_poller_perf 00:06:18.138 ************************************ 00:06:18.138 21:37:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:18.138 00:06:18.138 real 0m3.606s 00:06:18.138 user 0m3.080s 00:06:18.138 sys 0m0.292s 00:06:18.138 21:37:36 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.138 21:37:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.138 ************************************ 00:06:18.138 END TEST thread 00:06:18.138 ************************************ 00:06:18.138 21:37:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:18.138 21:37:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:18.138 21:37:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.138 21:37:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.138 21:37:37 -- common/autotest_common.sh@10 -- # set +x 00:06:18.138 ************************************ 00:06:18.138 START TEST app_cmdline 00:06:18.138 ************************************ 00:06:18.138 21:37:37 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:18.138 * Looking for test storage... 00:06:18.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:18.138 21:37:37 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.138 21:37:37 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.138 21:37:37 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.396 21:37:37 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.396 21:37:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:18.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.397 21:37:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.397 21:37:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.397 21:37:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.397 21:37:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.397 --rc genhtml_branch_coverage=1 00:06:18.397 --rc genhtml_function_coverage=1 00:06:18.397 --rc genhtml_legend=1 00:06:18.397 --rc geninfo_all_blocks=1 00:06:18.397 --rc geninfo_unexecuted_blocks=1 00:06:18.397 00:06:18.397 ' 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.397 --rc genhtml_branch_coverage=1 00:06:18.397 --rc genhtml_function_coverage=1 00:06:18.397 --rc genhtml_legend=1 00:06:18.397 --rc geninfo_all_blocks=1 00:06:18.397 --rc geninfo_unexecuted_blocks=1 00:06:18.397 00:06:18.397 ' 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.397 --rc genhtml_branch_coverage=1 00:06:18.397 --rc genhtml_function_coverage=1 00:06:18.397 --rc genhtml_legend=1 00:06:18.397 --rc geninfo_all_blocks=1 00:06:18.397 --rc geninfo_unexecuted_blocks=1 00:06:18.397 00:06:18.397 ' 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.397 --rc genhtml_branch_coverage=1 00:06:18.397 --rc genhtml_function_coverage=1 00:06:18.397 --rc genhtml_legend=1 00:06:18.397 --rc geninfo_all_blocks=1 00:06:18.397 --rc geninfo_unexecuted_blocks=1 00:06:18.397 00:06:18.397 ' 00:06:18.397 21:37:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:18.397 21:37:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60024 00:06:18.397 21:37:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60024 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60024 ']' 00:06:18.397 21:37:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.397 21:37:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.397 [2024-09-29 21:37:37.249053] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:18.397 [2024-09-29 21:37:37.249283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60024 ] 00:06:18.655 [2024-09-29 21:37:37.390094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.655 [2024-09-29 21:37:37.613715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:19.591 { 00:06:19.591 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:19.591 "fields": { 00:06:19.591 "major": 25, 00:06:19.591 "minor": 1, 00:06:19.591 "patch": 0, 00:06:19.591 "suffix": "-pre", 00:06:19.591 "commit": "09cc66129" 00:06:19.591 } 00:06:19.591 } 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:19.591 21:37:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:19.591 21:37:38 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:19.850 request: 00:06:19.850 { 00:06:19.850 "method": "env_dpdk_get_mem_stats", 00:06:19.850 "req_id": 1 00:06:19.850 } 00:06:19.850 Got JSON-RPC error response 00:06:19.850 response: 00:06:19.850 { 00:06:19.850 "code": -32601, 00:06:19.850 "message": "Method not found" 00:06:19.850 } 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.850 21:37:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60024 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60024 ']' 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60024 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60024 00:06:19.850 killing process with pid 60024 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60024' 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@969 -- # kill 60024 00:06:19.850 21:37:38 app_cmdline -- common/autotest_common.sh@974 -- # wait 60024 00:06:21.228 00:06:21.228 real 0m2.904s 00:06:21.228 user 0m3.131s 00:06:21.228 sys 0m0.473s 00:06:21.228 21:37:39 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.228 ************************************ 00:06:21.228 21:37:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.228 END TEST app_cmdline 00:06:21.228 ************************************ 00:06:21.228 21:37:39 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:21.228 21:37:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.228 21:37:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.228 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:06:21.228 ************************************ 00:06:21.228 START TEST version 00:06:21.228 ************************************ 00:06:21.228 21:37:39 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:21.228 * Looking for test storage... 
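The -32601 "Method not found" above is the --rpcs-allowed allowlist at work rather than a missing feature: this spdk_tgt was launched permitting only spdk_get_version and rpc_get_methods, so a real method like env_dpdk_get_mem_stats is refused as if it did not exist. A hypothetical session reusing the rpc() sketch from earlier against that target on the default socket:

print(rpc("spdk_get_version"))        # allowed: returns the version object
print(rpc("rpc_get_methods"))         # allowed: lists exactly the two permitted names
print(rpc("env_dpdk_get_mem_stats"))  # refused: error -32601, "Method not found"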
00:06:21.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:21.228 21:37:40 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.228 21:37:40 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.228 21:37:40 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:21.228 21:37:40 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:21.228 21:37:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.228 21:37:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.228 21:37:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.228 21:37:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.228 21:37:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.228 21:37:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.228 21:37:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.228 21:37:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.228 21:37:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.228 21:37:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.228 21:37:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.228 21:37:40 version -- scripts/common.sh@344 -- # case "$op" in 00:06:21.228 21:37:40 version -- scripts/common.sh@345 -- # : 1 00:06:21.228 21:37:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.228 21:37:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.228 21:37:40 version -- scripts/common.sh@365 -- # decimal 1 00:06:21.228 21:37:40 version -- scripts/common.sh@353 -- # local d=1 00:06:21.228 21:37:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.228 21:37:40 version -- scripts/common.sh@355 -- # echo 1 00:06:21.228 21:37:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.228 21:37:40 version -- scripts/common.sh@366 -- # decimal 2 00:06:21.228 21:37:40 version -- scripts/common.sh@353 -- # local d=2 00:06:21.229 21:37:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.229 21:37:40 version -- scripts/common.sh@355 -- # echo 2 00:06:21.229 21:37:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.229 21:37:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.229 21:37:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.229 21:37:40 version -- scripts/common.sh@368 -- # return 0 00:06:21.229 21:37:40 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.229 21:37:40 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.229 --rc genhtml_branch_coverage=1 00:06:21.229 --rc genhtml_function_coverage=1 00:06:21.229 --rc genhtml_legend=1 00:06:21.229 --rc geninfo_all_blocks=1 00:06:21.229 --rc geninfo_unexecuted_blocks=1 00:06:21.229 00:06:21.229 ' 00:06:21.229 21:37:40 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.229 --rc genhtml_branch_coverage=1 00:06:21.229 --rc genhtml_function_coverage=1 00:06:21.229 --rc genhtml_legend=1 00:06:21.229 --rc geninfo_all_blocks=1 00:06:21.229 --rc geninfo_unexecuted_blocks=1 00:06:21.229 00:06:21.229 ' 00:06:21.229 21:37:40 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:21.229 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:21.229 --rc genhtml_branch_coverage=1 00:06:21.229 --rc genhtml_function_coverage=1 00:06:21.229 --rc genhtml_legend=1 00:06:21.229 --rc geninfo_all_blocks=1 00:06:21.229 --rc geninfo_unexecuted_blocks=1 00:06:21.229 00:06:21.229 ' 00:06:21.229 21:37:40 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:21.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.229 --rc genhtml_branch_coverage=1 00:06:21.229 --rc genhtml_function_coverage=1 00:06:21.229 --rc genhtml_legend=1 00:06:21.229 --rc geninfo_all_blocks=1 00:06:21.229 --rc geninfo_unexecuted_blocks=1 00:06:21.229 00:06:21.229 ' 00:06:21.229 21:37:40 version -- app/version.sh@17 -- # get_header_version major 00:06:21.229 21:37:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.229 21:37:40 version -- app/version.sh@14 -- # cut -f2 00:06:21.229 21:37:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.229 21:37:40 version -- app/version.sh@17 -- # major=25 00:06:21.229 21:37:40 version -- app/version.sh@18 -- # get_header_version minor 00:06:21.229 21:37:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.229 21:37:40 version -- app/version.sh@14 -- # cut -f2 00:06:21.229 21:37:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.229 21:37:40 version -- app/version.sh@18 -- # minor=1 00:06:21.229 21:37:40 version -- app/version.sh@19 -- # get_header_version patch 00:06:21.229 21:37:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.229 21:37:40 version -- app/version.sh@14 -- # cut -f2 00:06:21.229 21:37:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.229 21:37:40 version -- app/version.sh@19 -- # patch=0 00:06:21.229 21:37:40 version -- app/version.sh@20 -- # get_header_version suffix 00:06:21.229 21:37:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:21.229 21:37:40 version -- app/version.sh@14 -- # cut -f2 00:06:21.229 21:37:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:21.229 21:37:40 version -- app/version.sh@20 -- # suffix=-pre 00:06:21.229 21:37:40 version -- app/version.sh@22 -- # version=25.1 00:06:21.229 21:37:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:21.229 21:37:40 version -- app/version.sh@28 -- # version=25.1rc0 00:06:21.229 21:37:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:21.229 21:37:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:21.229 21:37:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:21.229 21:37:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:21.229 00:06:21.229 real 0m0.186s 00:06:21.229 user 0m0.125s 00:06:21.229 sys 0m0.090s 00:06:21.229 21:37:40 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.229 21:37:40 version -- common/autotest_common.sh@10 -- # set +x 00:06:21.229 ************************************ 00:06:21.229 END TEST version 00:06:21.229 ************************************ 00:06:21.229 21:37:40 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:21.229 21:37:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:21.229 21:37:40 -- spdk/autotest.sh@194 -- # uname -s 00:06:21.229 21:37:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:21.229 21:37:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:21.229 21:37:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:21.229 21:37:40 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:21.229 21:37:40 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:21.229 21:37:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:21.229 21:37:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.229 21:37:40 -- common/autotest_common.sh@10 -- # set +x 00:06:21.229 ************************************ 00:06:21.229 START TEST blockdev_nvme 00:06:21.229 ************************************ 00:06:21.229 21:37:40 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:21.489 * Looking for test storage... 00:06:21.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.489 21:37:40 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.489 --rc genhtml_branch_coverage=1 00:06:21.489 --rc genhtml_function_coverage=1 00:06:21.489 --rc genhtml_legend=1 00:06:21.489 --rc geninfo_all_blocks=1 00:06:21.489 --rc geninfo_unexecuted_blocks=1 00:06:21.489 00:06:21.489 ' 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.489 --rc genhtml_branch_coverage=1 00:06:21.489 --rc genhtml_function_coverage=1 00:06:21.489 --rc genhtml_legend=1 00:06:21.489 --rc geninfo_all_blocks=1 00:06:21.489 --rc geninfo_unexecuted_blocks=1 00:06:21.489 00:06:21.489 ' 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.489 --rc genhtml_branch_coverage=1 00:06:21.489 --rc genhtml_function_coverage=1 00:06:21.489 --rc genhtml_legend=1 00:06:21.489 --rc geninfo_all_blocks=1 00:06:21.489 --rc geninfo_unexecuted_blocks=1 00:06:21.489 00:06:21.489 ' 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.489 --rc genhtml_branch_coverage=1 00:06:21.489 --rc genhtml_function_coverage=1 00:06:21.489 --rc genhtml_legend=1 00:06:21.489 --rc geninfo_all_blocks=1 00:06:21.489 --rc geninfo_unexecuted_blocks=1 00:06:21.489 00:06:21.489 ' 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:21.489 21:37:40 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60197 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:21.489 21:37:40 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60197 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60197 ']' 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.489 21:37:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:21.489 [2024-09-29 21:37:40.442887] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:21.489 [2024-09-29 21:37:40.443013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60197 ] 00:06:21.749 [2024-09-29 21:37:40.590448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.007 [2024-09-29 21:37:40.777877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.642 21:37:41 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.642 21:37:41 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:22.642 21:37:41 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:22.642 21:37:41 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:22.642 21:37:41 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:22.642 21:37:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:22.642 21:37:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:22.642 21:37:41 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:22.642 21:37:41 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.642 21:37:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.902 21:37:41 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.902 21:37:41 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:22.902 21:37:41 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.902 21:37:41 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.902 21:37:41 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.902 21:37:41 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:22.902 21:37:41 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:22.902 21:37:41 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.902 21:37:41 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.902 21:37:41 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:22.902 21:37:41 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:22.903 21:37:41 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9812cb20-a0ac-4993-8f9d-ee70227f2d49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9812cb20-a0ac-4993-8f9d-ee70227f2d49",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "7b95032b-cb6b-4b02-b9a3-615e171c5565"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7b95032b-cb6b-4b02-b9a3-615e171c5565",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ebc989fa-42b7-41a4-8060-8447547f0d47"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ebc989fa-42b7-41a4-8060-8447547f0d47",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c6b28d71-34ea-4c78-91ea-a4066896859d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c6b28d71-34ea-4c78-91ea-a4066896859d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d2931e68-3bfe-4339-a008-255a1acdf552"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "d2931e68-3bfe-4339-a008-255a1acdf552",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e48a3b99-0fd8-4b2b-bd22-05d2fd84f4a1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e48a3b99-0fd8-4b2b-bd22-05d2fd84f4a1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:22.903 21:37:41 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:22.903 21:37:41 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:22.903 21:37:41 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:22.903 21:37:41 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60197 00:06:22.903 21:37:41 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60197 ']' 00:06:22.903 21:37:41 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60197 00:06:22.903 21:37:41 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:22.903 21:37:41 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.903 21:37:41 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60197 00:06:22.903 21:37:41 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.903 21:37:41 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.903 killing process with pid 60197 00:06:22.903 21:37:41 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60197' 00:06:22.903 21:37:41 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60197 00:06:22.903 21:37:41 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60197 00:06:24.279 21:37:43 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:24.279 21:37:43 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:24.279 21:37:43 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:24.279 21:37:43 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.279 21:37:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.279 ************************************ 00:06:24.279 START TEST bdev_hello_world 00:06:24.279 ************************************ 00:06:24.279 21:37:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:24.279 [2024-09-29 21:37:43.226857] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:24.279 [2024-09-29 21:37:43.226979] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60280 ] 00:06:24.538 [2024-09-29 21:37:43.374499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.797 [2024-09-29 21:37:43.554837] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.363 [2024-09-29 21:37:44.065368] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:25.363 [2024-09-29 21:37:44.065425] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:25.363 [2024-09-29 21:37:44.065441] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:25.363 [2024-09-29 21:37:44.067466] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:25.363 [2024-09-29 21:37:44.067963] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:25.363 [2024-09-29 21:37:44.067985] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:25.363 [2024-09-29 21:37:44.068217] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
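For reference, the hello_bdev pass traced above can be replayed by hand with the same binary, JSON config, and bdev name used in this run. A minimal sketch (assumes a built SPDK tree at /home/vagrant/spdk_repo/spdk, root privileges, and hugepages already configured):

    cd /home/vagrant/spdk_repo/spdk
    # Open Nvme0n1 from the JSON bdev config, write "Hello World!"
    # to the bdev, read it back, then stop the app -- the same
    # sequence reported by the NOTICE lines above.
    sudo ./build/examples/hello_bdev \
        --json ./test/bdev/bdev.json \
        -b Nvme0n1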
00:06:25.363 00:06:25.363 [2024-09-29 21:37:44.068238] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:25.929 00:06:25.929 real 0m1.579s 00:06:25.929 user 0m1.293s 00:06:25.929 sys 0m0.180s 00:06:25.929 21:37:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.929 21:37:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:25.929 ************************************ 00:06:25.929 END TEST bdev_hello_world 00:06:25.929 ************************************ 00:06:25.929 21:37:44 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:25.929 21:37:44 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:25.929 21:37:44 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.929 21:37:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.929 ************************************ 00:06:25.929 START TEST bdev_bounds 00:06:25.929 ************************************ 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60317 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60317' 00:06:25.929 Process bdevio pid: 60317 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60317 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 60317 ']' 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.929 21:37:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:25.929 [2024-09-29 21:37:44.849471] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:25.929 [2024-09-29 21:37:44.849593] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60317 ] 00:06:26.187 [2024-09-29 21:37:44.998427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.445 [2024-09-29 21:37:45.180224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.445 [2024-09-29 21:37:45.180565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.445 [2024-09-29 21:37:45.180567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.012 21:37:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.012 21:37:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:27.012 21:37:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:27.012 I/O targets: 00:06:27.012 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:27.012 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:27.012 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:27.012 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:27.012 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:27.012 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:27.012 00:06:27.012 00:06:27.012 CUnit - A unit testing framework for C - Version 2.1-3 00:06:27.012 http://cunit.sourceforge.net/ 00:06:27.012 00:06:27.012 00:06:27.012 Suite: bdevio tests on: Nvme3n1 00:06:27.012 Test: blockdev write read block ...passed 00:06:27.012 Test: blockdev write zeroes read block ...passed 00:06:27.012 Test: blockdev write zeroes read no split ...passed 00:06:27.012 Test: blockdev write zeroes read split ...passed 00:06:27.012 Test: blockdev write zeroes read split partial ...passed 00:06:27.012 Test: blockdev reset ...[2024-09-29 21:37:45.871322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:27.012 passed 00:06:27.012 Test: blockdev write read 8 blocks ...[2024-09-29 21:37:45.874486] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:27.012 passed 00:06:27.012 Test: blockdev write read size > 128k ...passed 00:06:27.012 Test: blockdev write read invalid size ...passed 00:06:27.012 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:27.012 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:27.012 Test: blockdev write read max offset ...passed 00:06:27.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:27.012 Test: blockdev writev readv 8 blocks ...passed 00:06:27.012 Test: blockdev writev readv 30 x 1block ...passed 00:06:27.012 Test: blockdev writev readv block ...passed 00:06:27.012 Test: blockdev writev readv size > 128k ...passed 00:06:27.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:27.012 Test: blockdev comparev and writev ...[2024-09-29 21:37:45.880714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b720a000 len:0x1000 00:06:27.012 passed 00:06:27.012 Test: blockdev nvme passthru rw ...passed 00:06:27.012 Test: blockdev nvme passthru vendor specific ...[2024-09-29 21:37:45.880909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:27.012 [2024-09-29 21:37:45.881536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:27.012 [2024-09-29 21:37:45.881649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:27.012 passed 00:06:27.012 Test: blockdev nvme admin passthru ...passed 00:06:27.012 Test: blockdev copy ...passed 00:06:27.012 Suite: bdevio tests on: Nvme2n3 00:06:27.012 Test: blockdev write read block ...passed 00:06:27.012 Test: blockdev write zeroes read block ...passed 00:06:27.012 Test: blockdev write zeroes read no split ...passed 00:06:27.012 Test: blockdev write zeroes read split ...passed 00:06:27.012 Test: blockdev write zeroes read split partial ...passed 00:06:27.012 Test: blockdev reset ...[2024-09-29 21:37:45.943487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:27.012 [2024-09-29 21:37:45.946636] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:27.012 passed 00:06:27.012 Test: blockdev write read 8 blocks ...passed 00:06:27.012 Test: blockdev write read size > 128k ...passed 00:06:27.012 Test: blockdev write read invalid size ...passed 00:06:27.012 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:27.012 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:27.012 Test: blockdev write read max offset ...passed 00:06:27.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:27.012 Test: blockdev writev readv 8 blocks ...passed 00:06:27.012 Test: blockdev writev readv 30 x 1block ...passed 00:06:27.012 Test: blockdev writev readv block ...passed 00:06:27.012 Test: blockdev writev readv size > 128k ...passed 00:06:27.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:27.012 Test: blockdev comparev and writev ...[2024-09-29 21:37:45.953515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac804000 len:0x1000 00:06:27.012 [2024-09-29 21:37:45.953719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:27.012 passed 00:06:27.012 Test: blockdev nvme passthru rw ...passed 00:06:27.012 Test: blockdev nvme passthru vendor specific ...[2024-09-29 21:37:45.954786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:27.012 [2024-09-29 21:37:45.954982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:27.012 passed 00:06:27.012 Test: blockdev nvme admin passthru ...passed 00:06:27.012 Test: blockdev copy ...passed 00:06:27.012 Suite: bdevio tests on: Nvme2n2 00:06:27.012 Test: blockdev write read block ...passed 00:06:27.012 Test: blockdev write zeroes read block ...passed 00:06:27.012 Test: blockdev write zeroes read no split ...passed 00:06:27.012 Test: blockdev write zeroes read split ...passed 00:06:27.271 Test: blockdev write zeroes read split partial ...passed 00:06:27.271 Test: blockdev reset ...[2024-09-29 21:37:46.009255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:27.271 [2024-09-29 21:37:46.012222] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:27.271 passed 00:06:27.271 Test: blockdev write read 8 blocks ...passed 00:06:27.271 Test: blockdev write read size > 128k ...passed 00:06:27.271 Test: blockdev write read invalid size ...passed 00:06:27.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:27.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:27.271 Test: blockdev write read max offset ...passed 00:06:27.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:27.271 Test: blockdev writev readv 8 blocks ...passed 00:06:27.271 Test: blockdev writev readv 30 x 1block ...passed 00:06:27.271 Test: blockdev writev readv block ...passed 00:06:27.271 Test: blockdev writev readv size > 128k ...passed 00:06:27.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:27.271 Test: blockdev comparev and writev ...[2024-09-29 21:37:46.018503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c443a000 len:0x1000 00:06:27.271 [2024-09-29 21:37:46.018605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:27.271 passed 00:06:27.271 Test: blockdev nvme passthru rw ...passed 00:06:27.271 Test: blockdev nvme passthru vendor specific ...[2024-09-29 21:37:46.019140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:27.271 passed 00:06:27.271 Test: blockdev nvme admin passthru ...[2024-09-29 21:37:46.019205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:27.271 passed 00:06:27.271 Test: blockdev copy ...passed 00:06:27.271 Suite: bdevio tests on: Nvme2n1 00:06:27.271 Test: blockdev write read block ...passed 00:06:27.271 Test: blockdev write zeroes read block ...passed 00:06:27.271 Test: blockdev write zeroes read no split ...passed 00:06:27.271 Test: blockdev write zeroes read split ...passed 00:06:27.271 Test: blockdev write zeroes read split partial ...passed 00:06:27.271 Test: blockdev reset ...[2024-09-29 21:37:46.060621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:27.271 [2024-09-29 21:37:46.063510] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:27.272 passed 00:06:27.272 Test: blockdev write read 8 blocks ...passed 00:06:27.272 Test: blockdev write read size > 128k ...passed 00:06:27.272 Test: blockdev write read invalid size ...passed 00:06:27.272 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:27.272 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:27.272 Test: blockdev write read max offset ...passed 00:06:27.272 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:27.272 Test: blockdev writev readv 8 blocks ...passed 00:06:27.272 Test: blockdev writev readv 30 x 1block ...passed 00:06:27.272 Test: blockdev writev readv block ...passed 00:06:27.272 Test: blockdev writev readv size > 128k ...passed 00:06:27.272 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:27.272 Test: blockdev comparev and writev ...[2024-09-29 21:37:46.070113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4434000 len:0x1000 00:06:27.272 [2024-09-29 21:37:46.070206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:27.272 passed 00:06:27.272 Test: blockdev nvme passthru rw ...passed 00:06:27.272 Test: blockdev nvme passthru vendor specific ...[2024-09-29 21:37:46.070787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:27.272 [2024-09-29 21:37:46.070857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:27.272 passed 00:06:27.272 Test: blockdev nvme admin passthru ...passed 00:06:27.272 Test: blockdev copy ...passed 00:06:27.272 Suite: bdevio tests on: Nvme1n1 00:06:27.272 Test: blockdev write read block ...passed 00:06:27.272 Test: blockdev write zeroes read block ...passed 00:06:27.272 Test: blockdev write zeroes read no split ...passed 00:06:27.272 Test: blockdev write zeroes read split ...passed 00:06:27.272 Test: blockdev write zeroes read split partial ...passed 00:06:27.272 Test: blockdev reset ...[2024-09-29 21:37:46.126440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:27.272 [2024-09-29 21:37:46.129025] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:27.272 passed 00:06:27.272 Test: blockdev write read 8 blocks ...passed 00:06:27.272 Test: blockdev write read size > 128k ...passed 00:06:27.272 Test: blockdev write read invalid size ...passed 00:06:27.272 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:27.272 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:27.272 Test: blockdev write read max offset ...passed 00:06:27.272 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:27.272 Test: blockdev writev readv 8 blocks ...passed 00:06:27.272 Test: blockdev writev readv 30 x 1block ...passed 00:06:27.272 Test: blockdev writev readv block ...passed 00:06:27.272 Test: blockdev writev readv size > 128k ...passed 00:06:27.272 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:27.272 Test: blockdev comparev and writev ...[2024-09-29 21:37:46.138824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4430000 len:0x1000 00:06:27.272 [2024-09-29 21:37:46.138926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:27.272 passed 00:06:27.272 Test: blockdev nvme passthru rw ...passed 00:06:27.272 Test: blockdev nvme passthru vendor specific ...[2024-09-29 21:37:46.139505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:27.272 [2024-09-29 21:37:46.139575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:27.272 passed 00:06:27.272 Test: blockdev nvme admin passthru ...passed 00:06:27.272 Test: blockdev copy ...passed 00:06:27.272 Suite: bdevio tests on: Nvme0n1 00:06:27.272 Test: blockdev write read block ...passed 00:06:27.272 Test: blockdev write zeroes read block ...passed 00:06:27.272 Test: blockdev write zeroes read no split ...passed 00:06:27.272 Test: blockdev write zeroes read split ...passed 00:06:27.272 Test: blockdev write zeroes read split partial ...passed 00:06:27.272 Test: blockdev reset ...[2024-09-29 21:37:46.192111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:27.272 [2024-09-29 21:37:46.194782] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:27.272 passed 00:06:27.272 Test: blockdev write read 8 blocks ...passed 00:06:27.272 Test: blockdev write read size > 128k ...passed 00:06:27.272 Test: blockdev write read invalid size ...passed 00:06:27.272 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:27.272 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:27.272 Test: blockdev write read max offset ...passed 00:06:27.272 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:27.272 Test: blockdev writev readv 8 blocks ...passed 00:06:27.272 Test: blockdev writev readv 30 x 1block ...passed 00:06:27.272 Test: blockdev writev readv block ...passed 00:06:27.272 Test: blockdev writev readv size > 128k ...passed 00:06:27.272 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:27.272 Test: blockdev comparev and writev ...passed 00:06:27.272 Test: blockdev nvme passthru rw ...[2024-09-29 21:37:46.199955] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:27.272 separate metadata which is not supported yet. 00:06:27.272 passed 00:06:27.272 Test: blockdev nvme passthru vendor specific ...[2024-09-29 21:37:46.200352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:27.272 [2024-09-29 21:37:46.200461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:27.272 passed 00:06:27.272 Test: blockdev nvme admin passthru ...passed 00:06:27.272 Test: blockdev copy ...passed 00:06:27.272 00:06:27.272 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.272 suites 6 6 n/a 0 0 00:06:27.272 tests 138 138 138 0 0 00:06:27.272 asserts 893 893 893 0 n/a 00:06:27.272 00:06:27.272 Elapsed time = 0.987 seconds 00:06:27.272 0 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60317 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 60317 ']' 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 60317 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60317 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60317' 00:06:27.272 killing process with pid 60317 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 60317 00:06:27.272 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 60317 00:06:28.206 21:37:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:28.206 00:06:28.206 real 0m2.050s 00:06:28.206 user 0m4.930s 00:06:28.206 sys 0m0.307s 00:06:28.206 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.206 21:37:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:28.206 ************************************ 00:06:28.206 END 
TEST bdev_bounds 00:06:28.206 ************************************ 00:06:28.206 21:37:46 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:28.206 21:37:46 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:28.206 21:37:46 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.206 21:37:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:28.206 ************************************ 00:06:28.206 START TEST bdev_nbd 00:06:28.206 ************************************ 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60371 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60371 /var/tmp/spdk-nbd.sock 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 60371 ']' 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
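The bdev_nbd test that follows exercises the NBD export path through the nbd_common.sh helpers. A condensed sketch of the same sequence for a single device (assumes the nbd kernel module is loaded and bdev_svc is already listening on /var/tmp/spdk-nbd.sock; the open-ended polling loop below is a simplification of the bounded waitfornbd retry the test actually uses):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # Export the bdev as a kernel block device.
    $rpc -s $sock nbd_start_disk Nvme0n1 /dev/nbd0
    # Wait until the kernel has registered the device.
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    # Sanity-check with a single direct 4 KiB read.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    # Tear the export down again.
    $rpc -s $sock nbd_stop_disk /dev/nbd0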
00:06:28.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:28.206 21:37:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:28.206 [2024-09-29 21:37:46.947289] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:28.206 [2024-09-29 21:37:46.947435] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.206 [2024-09-29 21:37:47.096907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.464 [2024-09-29 21:37:47.278778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:29.028 21:37:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- 
# (( i <= 20 )) 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.286 1+0 records in 00:06:29.286 1+0 records out 00:06:29.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521567 s, 7.9 MB/s 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:29.286 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.543 1+0 records in 00:06:29.543 1+0 records out 00:06:29.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385833 s, 10.6 MB/s 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- 
# return 0 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:29.543 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.801 1+0 records in 00:06:29.801 1+0 records out 00:06:29.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453262 s, 9.0 MB/s 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.801 1+0 records in 00:06:29.801 1+0 records out 00:06:29.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360753 s, 11.4 MB/s 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:29.801 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.058 1+0 records in 00:06:30.058 1+0 records out 00:06:30.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374874 s, 10.9 MB/s 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:30.058 21:37:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:30.316 1+0 records in 00:06:30.316 1+0 records out 00:06:30.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596561 s, 6.9 MB/s 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:30.316 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd0", 00:06:30.574 "bdev_name": "Nvme0n1" 00:06:30.574 }, 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd1", 00:06:30.574 "bdev_name": "Nvme1n1" 00:06:30.574 }, 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd2", 00:06:30.574 "bdev_name": "Nvme2n1" 00:06:30.574 }, 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd3", 00:06:30.574 "bdev_name": "Nvme2n2" 00:06:30.574 }, 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd4", 00:06:30.574 "bdev_name": "Nvme2n3" 00:06:30.574 }, 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd5", 00:06:30.574 "bdev_name": "Nvme3n1" 00:06:30.574 } 00:06:30.574 ]' 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd0", 00:06:30.574 "bdev_name": "Nvme0n1" 00:06:30.574 
}, 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd1", 00:06:30.574 "bdev_name": "Nvme1n1" 00:06:30.574 }, 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd2", 00:06:30.574 "bdev_name": "Nvme2n1" 00:06:30.574 }, 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd3", 00:06:30.574 "bdev_name": "Nvme2n2" 00:06:30.574 }, 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd4", 00:06:30.574 "bdev_name": "Nvme2n3" 00:06:30.574 }, 00:06:30.574 { 00:06:30.574 "nbd_device": "/dev/nbd5", 00:06:30.574 "bdev_name": "Nvme3n1" 00:06:30.574 } 00:06:30.574 ]' 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.574 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.832 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.832 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.832 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.832 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.832 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.832 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.832 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.832 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.832 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.832 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.090 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.090 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.090 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.090 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.090 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.090 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.090 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.090 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.090 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.090 21:37:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 
00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.360 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:31.636 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:31.636 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:31.636 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:31.636 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.636 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.636 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:31.636 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.636 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.636 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.636 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.893 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:32.151 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:32.152 21:37:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:32.410 /dev/nbd0 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.410 1+0 records in 00:06:32.410 1+0 records out 00:06:32.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534075 s, 7.7 MB/s 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:32.410 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:32.668 /dev/nbd1 00:06:32.668 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.668 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.668 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:32.668 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.669 1+0 records in 00:06:32.669 
1+0 records out 00:06:32.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079401 s, 5.2 MB/s 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:32.669 /dev/nbd10 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.669 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.928 1+0 records in 00:06:32.928 1+0 records out 00:06:32.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000953142 s, 4.3 MB/s 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:32.928 /dev/nbd11 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:32.928 
21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.928 1+0 records in 00:06:32.928 1+0 records out 00:06:32.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106192 s, 3.9 MB/s 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:32.928 21:37:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:33.190 /dev/nbd12 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.190 1+0 records in 00:06:33.190 1+0 records out 00:06:33.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377151 s, 10.9 MB/s 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:33.190 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:33.452 /dev/nbd13 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.452 1+0 records in 00:06:33.452 1+0 records out 00:06:33.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505218 s, 8.1 MB/s 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.452 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd0", 00:06:33.713 "bdev_name": "Nvme0n1" 00:06:33.713 }, 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd1", 00:06:33.713 "bdev_name": "Nvme1n1" 00:06:33.713 }, 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd10", 00:06:33.713 "bdev_name": "Nvme2n1" 00:06:33.713 }, 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd11", 00:06:33.713 "bdev_name": "Nvme2n2" 00:06:33.713 }, 
00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd12", 00:06:33.713 "bdev_name": "Nvme2n3" 00:06:33.713 }, 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd13", 00:06:33.713 "bdev_name": "Nvme3n1" 00:06:33.713 } 00:06:33.713 ]' 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd0", 00:06:33.713 "bdev_name": "Nvme0n1" 00:06:33.713 }, 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd1", 00:06:33.713 "bdev_name": "Nvme1n1" 00:06:33.713 }, 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd10", 00:06:33.713 "bdev_name": "Nvme2n1" 00:06:33.713 }, 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd11", 00:06:33.713 "bdev_name": "Nvme2n2" 00:06:33.713 }, 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd12", 00:06:33.713 "bdev_name": "Nvme2n3" 00:06:33.713 }, 00:06:33.713 { 00:06:33.713 "nbd_device": "/dev/nbd13", 00:06:33.713 "bdev_name": "Nvme3n1" 00:06:33.713 } 00:06:33.713 ]' 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.713 /dev/nbd1 00:06:33.713 /dev/nbd10 00:06:33.713 /dev/nbd11 00:06:33.713 /dev/nbd12 00:06:33.713 /dev/nbd13' 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.713 /dev/nbd1 00:06:33.713 /dev/nbd10 00:06:33.713 /dev/nbd11 00:06:33.713 /dev/nbd12 00:06:33.713 /dev/nbd13' 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:33.713 256+0 records in 00:06:33.713 256+0 records out 00:06:33.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00990822 s, 106 MB/s 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.713 256+0 records in 00:06:33.713 256+0 records out 00:06:33.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0674605 s, 15.5 MB/s 00:06:33.713 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.713 21:37:52 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.974 256+0 records in 00:06:33.974 256+0 records out 00:06:33.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0677292 s, 15.5 MB/s 00:06:33.974 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.974 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:33.974 256+0 records in 00:06:33.974 256+0 records out 00:06:33.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0702965 s, 14.9 MB/s 00:06:33.974 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.974 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:33.974 256+0 records in 00:06:33.974 256+0 records out 00:06:33.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0676558 s, 15.5 MB/s 00:06:33.974 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.974 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:34.235 256+0 records in 00:06:34.235 256+0 records out 00:06:34.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0673777 s, 15.6 MB/s 00:06:34.235 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.235 21:37:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:34.235 256+0 records in 00:06:34.235 256+0 records out 00:06:34.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0659405 s, 15.9 MB/s 00:06:34.235 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:34.235 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd10 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.236 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.498 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.498 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.498 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.498 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.498 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.498 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.498 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.498 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.498 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.498 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.760 21:37:53 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.760 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:35.022 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:35.022 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:35.022 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:35.022 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.022 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.022 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:35.022 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.022 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.022 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.022 21:37:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:35.283 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:35.283 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:35.283 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:35.283 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.283 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.283 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:35.283 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.283 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.283 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.283 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd13 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.542 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:35.801 malloc_lvol_verify 00:06:35.801 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:36.059 38b5f8e1-c226-40f6-aee5-9cd08b3aeccf 00:06:36.059 21:37:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:36.317 0ce5132f-350d-4a0b-82ec-9eb0f08b826c 00:06:36.317 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:36.574 /dev/nbd0 00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 
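Condensed, the lvol verification traced above builds a 16 MiB malloc bdev with 512-byte blocks, layers a logical-volume store and a 4 MiB volume on it, exports the volume over nbd, and requires a non-zero size in sysfs before the mkfs.ext4 run that follows. A hedged sketch of the same flow, using only RPC names and arguments that appear in the trace (the shell variables are mine):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB, 512 B blocks
$rpc -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
$rpc -s "$sock" bdev_lvol_create lvol 4 -l lvs                   # 4 MiB volume
$rpc -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
# /sys/block/nbd0/size counts 512-byte sectors, so the 8192 checked above is
# exactly 4 MiB; a zero would mean the size never reached the kernel, in which
# case formatting the device should be refused.
[[ -e /sys/block/nbd0/size ]] && (( $(< /sys/block/nbd0/size) > 0 ))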
00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:36.574 mke2fs 1.47.0 (5-Feb-2023) 00:06:36.574 Discarding device blocks: 0/4096 done 00:06:36.574 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:36.574 00:06:36.574 Allocating group tables: 0/1 done 00:06:36.574 Writing inode tables: 0/1 done 00:06:36.574 Creating journal (1024 blocks): done 00:06:36.574 Writing superblocks and filesystem accounting information: 0/1 done 00:06:36.574 00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:36.574 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.575 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60371 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 60371 ']' 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 60371 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60371 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.833 killing process with pid 60371 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60371' 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 60371 00:06:36.833 21:37:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 60371 00:06:37.767 21:37:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:37.767 00:06:37.767 real 0m9.528s 00:06:37.767 user 0m13.632s 00:06:37.767 sys 0m3.034s 00:06:37.767 ************************************ 00:06:37.767 END TEST bdev_nbd 00:06:37.767 ************************************ 00:06:37.767 21:37:56 blockdev_nvme.bdev_nbd 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.767 21:37:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:37.767 21:37:56 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:37.767 21:37:56 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:37.767 skipping fio tests on NVMe due to multi-ns failures. 00:06:37.767 21:37:56 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:37.767 21:37:56 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:37.767 21:37:56 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:37.767 21:37:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:37.767 21:37:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.767 21:37:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.767 ************************************ 00:06:37.767 START TEST bdev_verify 00:06:37.767 ************************************ 00:06:37.767 21:37:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:37.767 [2024-09-29 21:37:56.514728] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:37.768 [2024-09-29 21:37:56.514847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60739 ] 00:06:37.768 [2024-09-29 21:37:56.664743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.025 [2024-09-29 21:37:56.876743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.025 [2024-09-29 21:37:56.876818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.590 Running I/O for 5 seconds... 
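The verify job just launched is the stock bdevperf example binary, and the latency table that follows is its normal end-of-run summary. Reconstructed from the traced arguments (paths verbatim; the flag glosses are my reading, not harness documentation):

# -q 128: queue depth; -o 4096: I/O size in bytes; -w verify: read back and
# check each block written; -t 5: run for five seconds; -m 0x3: reactors on
# cores 0 and 1. -C appears to let every core drive every bdev, which matches
# each Nvme*n1 reporting both a Core Mask 0x1 and a Core Mask 0x2 job in the
# results below. The trailing '' is carried over verbatim from the trace.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''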
00:06:43.696 26112.00 IOPS, 102.00 MiB/s 26464.00 IOPS, 103.38 MiB/s 26624.00 IOPS, 104.00 MiB/s 26496.00 IOPS, 103.50 MiB/s 26316.80 IOPS, 102.80 MiB/s 00:06:43.696 Latency(us) 00:06:43.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.696 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0x0 length 0xbd0bd 00:06:43.696 Nvme0n1 : 5.05 2180.19 8.52 0.00 0.00 58566.86 14317.10 60898.07 00:06:43.696 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:43.696 Nvme0n1 : 5.05 2153.19 8.41 0.00 0.00 59263.67 14821.22 67754.14 00:06:43.696 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0x0 length 0xa0000 00:06:43.696 Nvme1n1 : 5.05 2179.56 8.51 0.00 0.00 58497.82 16131.94 49807.36 00:06:43.696 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0xa0000 length 0xa0000 00:06:43.696 Nvme1n1 : 5.06 2151.63 8.40 0.00 0.00 59134.03 16837.71 54445.29 00:06:43.696 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0x0 length 0x80000 00:06:43.696 Nvme2n1 : 5.05 2179.00 8.51 0.00 0.00 58434.29 15224.52 50009.01 00:06:43.696 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0x80000 length 0x80000 00:06:43.696 Nvme2n1 : 5.06 2149.81 8.40 0.00 0.00 59017.65 15627.82 48597.46 00:06:43.696 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0x0 length 0x80000 00:06:43.696 Nvme2n2 : 5.05 2178.40 8.51 0.00 0.00 58351.62 14317.10 47185.92 00:06:43.696 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0x80000 length 0x80000 00:06:43.696 Nvme2n2 : 5.08 2156.10 8.42 0.00 0.00 58746.96 4990.82 47185.92 00:06:43.696 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0x0 length 0x80000 00:06:43.696 Nvme2n3 : 5.06 2176.87 8.50 0.00 0.00 58274.92 13409.67 46580.97 00:06:43.696 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0x80000 length 0x80000 00:06:43.696 Nvme2n3 : 5.09 2162.69 8.45 0.00 0.00 58550.67 11141.12 48799.11 00:06:43.696 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0x0 length 0x20000 00:06:43.696 Nvme3n1 : 5.07 2185.13 8.54 0.00 0.00 57992.68 2508.01 46782.62 00:06:43.696 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:43.696 Verification LBA range: start 0x20000 length 0x20000 00:06:43.696 Nvme3n1 : 5.09 2162.12 8.45 0.00 0.00 58510.98 9376.69 50412.31 00:06:43.696 =================================================================================================================== 00:06:43.696 Total : 26014.68 101.62 0.00 0.00 58609.95 2508.01 67754.14 00:06:45.595 00:06:45.595 real 0m7.769s 00:06:45.595 user 0m14.374s 00:06:45.595 sys 0m0.238s 00:06:45.595 21:38:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.595 ************************************ 00:06:45.595 END TEST bdev_verify 00:06:45.595 ************************************ 00:06:45.595 
21:38:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:45.595 21:38:04 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:45.595 21:38:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:45.595 21:38:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.595 21:38:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.595 ************************************ 00:06:45.595 START TEST bdev_verify_big_io 00:06:45.595 ************************************ 00:06:45.595 21:38:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:45.595 [2024-09-29 21:38:04.338425] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:45.595 [2024-09-29 21:38:04.338539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60837 ] 00:06:45.595 [2024-09-29 21:38:04.489237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.853 [2024-09-29 21:38:04.714748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.853 [2024-09-29 21:38:04.714807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.787 Running I/O for 5 seconds... 00:06:52.595 862.00 IOPS, 53.88 MiB/s 2448.50 IOPS, 153.03 MiB/s 3039.33 IOPS, 189.96 MiB/s 00:06:52.595 Latency(us) 00:06:52.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.595 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0x0 length 0xbd0b 00:06:52.595 Nvme0n1 : 5.61 136.99 8.56 0.00 0.00 908036.59 36901.81 1148594.02 00:06:52.595 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:52.595 Nvme0n1 : 5.74 133.86 8.37 0.00 0.00 924769.81 26214.40 1135688.47 00:06:52.595 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0x0 length 0xa000 00:06:52.595 Nvme1n1 : 5.61 136.92 8.56 0.00 0.00 872657.53 109697.18 948557.98 00:06:52.595 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0xa000 length 0xa000 00:06:52.595 Nvme1n1 : 5.74 133.75 8.36 0.00 0.00 892810.37 91145.45 961463.53 00:06:52.595 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0x0 length 0x8000 00:06:52.595 Nvme2n1 : 5.74 137.48 8.59 0.00 0.00 830827.03 110503.78 967916.31 00:06:52.595 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0x8000 length 0x8000 00:06:52.595 Nvme2n1 : 5.84 135.16 8.45 0.00 0.00 849961.22 93161.94 884030.23 00:06:52.595 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0x0 length 0x8000 00:06:52.595 Nvme2n2 : 5.88 148.36 9.27 0.00 0.00 
752139.85 63317.86 987274.63 00:06:52.595 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0x8000 length 0x8000 00:06:52.595 Nvme2n2 : 5.88 141.52 8.85 0.00 0.00 792992.51 38313.35 884030.23 00:06:52.595 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0x0 length 0x8000 00:06:52.595 Nvme2n3 : 5.92 155.48 9.72 0.00 0.00 696046.68 28634.19 1013085.74 00:06:52.595 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0x8000 length 0x8000 00:06:52.595 Nvme2n3 : 5.94 150.89 9.43 0.00 0.00 721038.23 40733.14 884030.23 00:06:52.595 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0x0 length 0x2000 00:06:52.595 Nvme3n1 : 6.05 186.32 11.65 0.00 0.00 565610.56 510.42 1038896.84 00:06:52.595 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:52.595 Verification LBA range: start 0x2000 length 0x2000 00:06:52.595 Nvme3n1 : 6.05 169.20 10.57 0.00 0.00 624553.38 819.20 884030.23 00:06:52.595 =================================================================================================================== 00:06:52.595 Total : 1765.93 110.37 0.00 0.00 772059.38 510.42 1148594.02 00:06:55.125 00:06:55.125 real 0m9.220s 00:06:55.125 user 0m17.228s 00:06:55.125 sys 0m0.285s 00:06:55.125 21:38:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.125 21:38:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:55.125 ************************************ 00:06:55.125 END TEST bdev_verify_big_io 00:06:55.125 ************************************ 00:06:55.125 21:38:13 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.125 21:38:13 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:55.125 21:38:13 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.125 21:38:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.125 ************************************ 00:06:55.125 START TEST bdev_write_zeroes 00:06:55.125 ************************************ 00:06:55.125 21:38:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:55.125 [2024-09-29 21:38:13.609653] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:55.125 [2024-09-29 21:38:13.609769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60959 ] 00:06:55.125 [2024-09-29 21:38:13.757579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.125 [2024-09-29 21:38:13.975081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.690 Running I/O for 1 seconds... 
00:06:56.620 67064.00 IOPS, 261.97 MiB/s 00:06:56.620 Latency(us) 00:06:56.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.620 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.620 Nvme0n1 : 1.02 11108.44 43.39 0.00 0.00 11495.58 6604.01 22483.89 00:06:56.620 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.620 Nvme1n1 : 1.02 11118.14 43.43 0.00 0.00 11472.97 6805.66 22685.54 00:06:56.620 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.620 Nvme2n1 : 1.03 11105.54 43.38 0.00 0.00 11458.70 5696.59 21273.99 00:06:56.620 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.620 Nvme2n2 : 1.03 11092.99 43.33 0.00 0.00 11451.74 5167.26 20366.57 00:06:56.620 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.620 Nvme2n3 : 1.03 11032.68 43.10 0.00 0.00 11494.76 4965.61 21374.82 00:06:56.620 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:56.620 Nvme3n1 : 1.03 11005.68 42.99 0.00 0.00 11509.65 8973.39 22887.19 00:06:56.620 =================================================================================================================== 00:06:56.620 Total : 66463.47 259.62 0.00 0.00 11480.52 4965.61 22887.19 00:06:57.553 ************************************ 00:06:57.553 END TEST bdev_write_zeroes 00:06:57.553 ************************************ 00:06:57.553 00:06:57.553 real 0m2.956s 00:06:57.553 user 0m2.612s 00:06:57.553 sys 0m0.227s 00:06:57.553 21:38:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.553 21:38:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:57.811 21:38:16 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:57.811 21:38:16 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:57.811 21:38:16 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.811 21:38:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.811 ************************************ 00:06:57.811 START TEST bdev_json_nonenclosed 00:06:57.811 ************************************ 00:06:57.811 21:38:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:57.811 [2024-09-29 21:38:16.619581] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:57.811 [2024-09-29 21:38:16.619696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61012 ] 00:06:57.811 [2024-09-29 21:38:16.771931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.070 [2024-09-29 21:38:16.959611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.070 [2024-09-29 21:38:16.959696] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
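The "not enclosed in {}" failure just logged is deliberate: bdevperf is handed nonenclosed.json, whose subsystem data is not wrapped in a top-level object, and json_config_prepare_ctx must reject it before any bdev comes up. The fixture's contents are not printed in this log; a plausible minimal reconstruction (path and contents illustrative, only the braceless shape matters):

# A valid SPDK config is an object, {"subsystems": [...]}. Dropping the outer
# braces reproduces the "not enclosed in {}" error above.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF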
00:06:58.070 [2024-09-29 21:38:16.959714] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:58.070 [2024-09-29 21:38:16.959724] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.328 00:06:58.328 real 0m0.691s 00:06:58.328 user 0m0.488s 00:06:58.328 sys 0m0.098s 00:06:58.328 ************************************ 00:06:58.328 END TEST bdev_json_nonenclosed 00:06:58.328 ************************************ 00:06:58.328 21:38:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.328 21:38:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:58.328 21:38:17 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.328 21:38:17 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:58.328 21:38:17 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.328 21:38:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.328 ************************************ 00:06:58.328 START TEST bdev_json_nonarray 00:06:58.328 ************************************ 00:06:58.328 21:38:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.586 [2024-09-29 21:38:17.374091] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:58.586 [2024-09-29 21:38:17.374212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61043 ] 00:06:58.586 [2024-09-29 21:38:17.524697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.846 [2024-09-29 21:38:17.708714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.846 [2024-09-29 21:38:17.708805] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
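Note the pass/fail inversion in these two JSON tests: the warning that follows, spdk_app_stop'd on non-zero, is the success condition. The run_test wrapper's internals are not shown in this log; outside the harness, an equivalent standalone assertion might look like this sketch, reusing the traced nonarray invocation:

# Succeed only if bdevperf refuses the malformed config.
if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
     --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json \
     -q 128 -o 4096 -w write_zeroes -t 1 ''; then
  echo "malformed config was accepted" >&2
  exit 1
fi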
00:06:58.846 [2024-09-29 21:38:17.708824] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:58.846 [2024-09-29 21:38:17.708834] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.105 00:06:59.105 real 0m0.693s 00:06:59.105 user 0m0.481s 00:06:59.105 sys 0m0.107s 00:06:59.105 21:38:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.105 ************************************ 00:06:59.105 END TEST bdev_json_nonarray 00:06:59.105 ************************************ 00:06:59.105 21:38:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:59.105 21:38:18 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:59.105 00:06:59.105 real 0m37.845s 00:06:59.105 user 0m58.072s 00:06:59.105 sys 0m5.256s 00:06:59.105 21:38:18 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.105 21:38:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.105 ************************************ 00:06:59.105 END TEST blockdev_nvme 00:06:59.105 ************************************ 00:06:59.363 21:38:18 -- spdk/autotest.sh@209 -- # uname -s 00:06:59.363 21:38:18 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:59.363 21:38:18 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:59.363 21:38:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.363 21:38:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.363 21:38:18 -- common/autotest_common.sh@10 -- # set +x 00:06:59.363 ************************************ 00:06:59.363 START TEST blockdev_nvme_gpt 00:06:59.363 ************************************ 00:06:59.363 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:59.363 * Looking for test storage... 
00:06:59.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:59.363 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:59.363 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:59.363 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:06:59.363 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.363 21:38:18 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:59.363 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.363 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:59.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.363 --rc genhtml_branch_coverage=1 00:06:59.363 --rc genhtml_function_coverage=1 00:06:59.363 --rc genhtml_legend=1 00:06:59.363 --rc geninfo_all_blocks=1 00:06:59.363 --rc geninfo_unexecuted_blocks=1 00:06:59.363 00:06:59.363 ' 00:06:59.363 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:59.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.363 --rc 
genhtml_branch_coverage=1 00:06:59.363 --rc genhtml_function_coverage=1 00:06:59.363 --rc genhtml_legend=1 00:06:59.363 --rc geninfo_all_blocks=1 00:06:59.364 --rc geninfo_unexecuted_blocks=1 00:06:59.364 00:06:59.364 ' 00:06:59.364 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:59.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.364 --rc genhtml_branch_coverage=1 00:06:59.364 --rc genhtml_function_coverage=1 00:06:59.364 --rc genhtml_legend=1 00:06:59.364 --rc geninfo_all_blocks=1 00:06:59.364 --rc geninfo_unexecuted_blocks=1 00:06:59.364 00:06:59.364 ' 00:06:59.364 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:59.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.364 --rc genhtml_branch_coverage=1 00:06:59.364 --rc genhtml_function_coverage=1 00:06:59.364 --rc genhtml_legend=1 00:06:59.364 --rc geninfo_all_blocks=1 00:06:59.364 --rc geninfo_unexecuted_blocks=1 00:06:59.364 00:06:59.364 ' 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61127 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61127 
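A side note on the lcov version gate traced above (scripts/common.sh, lt 1.15 2 via cmp_versions): the comparison is a field-by-field numeric walk over the split version strings, with missing fields treated as 0. A condensed, self-contained sketch of the same idea; the function name is illustrative and it splits on dots only, whereas the tree's version also splits on '-' and ':':

    version_lt() {   # usage: version_lt A B  ->  returns 0 if A < B
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            # Missing fields count as 0, so "1.15" vs "2" decides on 1 < 2.
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: keep branch/function coverage flags"   # matches the branch taken above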
00:06:59.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.364 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61127 ']' 00:06:59.364 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.364 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.364 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.364 21:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:59.364 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.364 21:38:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.364 [2024-09-29 21:38:18.336158] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:59.364 [2024-09-29 21:38:18.336280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61127 ] 00:06:59.622 [2024-09-29 21:38:18.485099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.881 [2024-09-29 21:38:18.670747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.450 21:38:19 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.450 21:38:19 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:00.450 21:38:19 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:00.450 21:38:19 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:00.450 21:38:19 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:00.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:00.966 Waiting for block devices as requested 00:07:00.966 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.966 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.967 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:01.225 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.490 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.490 21:38:25 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:06.490 21:38:25 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:06.490 BYT; 00:07:06.490 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:06.490 BYT; 00:07:06.490 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:06.490 21:38:25 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:06.490 21:38:25 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:07.423 The operation has completed successfully. 00:07:07.423 21:38:26 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:08.357 The operation has completed successfully. 00:07:08.357 21:38:27 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:08.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:09.182 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.182 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.182 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.441 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:09.441 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:09.441 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.441 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.441 [] 00:07:09.441 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.441 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:09.441 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:09.441 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:09.441 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:09.441 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:09.441 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.441 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.700 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.700 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:09.700 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:09.700 21:38:28 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.700 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.700 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.700 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:09.700 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:09.700 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.700 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.700 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:09.700 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:09.701 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7fa5f7b3-348d-4d2e-abc7-fa445fabde35"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7fa5f7b3-348d-4d2e-abc7-fa445fabde35",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "745ab879-6143-4cb9-894c-4e1b38ba4ee8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "745ab879-6143-4cb9-894c-4e1b38ba4ee8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0598b1b5-750f-4ceb-830f-49cd364d4aec"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0598b1b5-750f-4ceb-830f-49cd364d4aec",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9d45fde1-2b58-40dc-92ac-2f99ce119ffa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9d45fde1-2b58-40dc-92ac-2f99ce119ffa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ab658905-e2c2-4c86-9301-7e2b51d73705"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ab658905-e2c2-4c86-9301-7e2b51d73705",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:09.958 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:09.958 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:09.958 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:09.958 21:38:28 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61127 00:07:09.958 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61127 ']' 00:07:09.958 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61127 00:07:09.958 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:09.958 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.958 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61127 00:07:09.958 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.958 killing process with pid 61127 00:07:09.959 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.959 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61127' 00:07:09.959 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61127 00:07:09.959 21:38:28 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61127 00:07:11.332 21:38:29 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:11.332 21:38:29 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:11.332 21:38:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:11.332 21:38:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.332 21:38:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.332 ************************************ 00:07:11.332 START TEST bdev_hello_world 00:07:11.332 ************************************ 00:07:11.332 21:38:29 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:11.332 
[2024-09-29 21:38:30.023587] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:11.332 [2024-09-29 21:38:30.023683] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61745 ] 00:07:11.332 [2024-09-29 21:38:30.166162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.590 [2024-09-29 21:38:30.318288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.848 [2024-09-29 21:38:30.812338] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:11.848 [2024-09-29 21:38:30.812383] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:11.848 [2024-09-29 21:38:30.812407] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:11.848 [2024-09-29 21:38:30.814349] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:11.848 [2024-09-29 21:38:30.814861] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:11.848 [2024-09-29 21:38:30.814886] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:11.848 [2024-09-29 21:38:30.815065] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:11.848 00:07:11.848 [2024-09-29 21:38:30.815085] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:12.782 00:07:12.782 real 0m1.492s 00:07:12.782 user 0m1.203s 00:07:12.782 sys 0m0.184s 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:12.782 ************************************ 00:07:12.782 END TEST bdev_hello_world 00:07:12.782 ************************************ 00:07:12.782 21:38:31 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:12.782 21:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.782 21:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.782 21:38:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.782 ************************************ 00:07:12.782 START TEST bdev_bounds 00:07:12.782 ************************************ 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:12.782 Process bdevio pid: 61786 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61786 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61786' 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61786 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61786 ']' 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.782 21:38:31 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.782 21:38:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:12.782 [2024-09-29 21:38:31.560557] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:12.782 [2024-09-29 21:38:31.560681] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61786 ] 00:07:12.782 [2024-09-29 21:38:31.709291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.040 [2024-09-29 21:38:31.863773] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.040 [2024-09-29 21:38:31.863991] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.040 [2024-09-29 21:38:31.864217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.605 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.605 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:13.605 21:38:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:13.605 I/O targets: 00:07:13.605 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:13.605 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:13.605 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:13.605 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.605 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.605 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:13.605 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:13.605 00:07:13.605 00:07:13.605 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.605 http://cunit.sourceforge.net/ 00:07:13.605 00:07:13.605 00:07:13.605 Suite: bdevio tests on: Nvme3n1 00:07:13.605 Test: blockdev write read block ...passed 00:07:13.605 Test: blockdev write zeroes read block ...passed 00:07:13.605 Test: blockdev write zeroes read no split ...passed 00:07:13.605 Test: blockdev write zeroes read split ...passed 00:07:13.605 Test: blockdev write zeroes read split partial ...passed 00:07:13.605 Test: blockdev reset ...[2024-09-29 21:38:32.528805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:13.605 [2024-09-29 21:38:32.531723] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
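For anyone rerunning this suite by hand, the two commands visible in the trace above are all it takes: bdevio is started with -w so it sits waiting after init, and tests.py perform_tests then drives every registered suite over the RPC socket. A sketch with the same binaries and flags as this run; the backgrounding, sleep, and explicit wait are added here for illustration (the harness uses waitforlisten instead of sleep):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # Give the app time to bring up its RPC server; sleep is a crude stand-in.
    sleep 2
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid" && wait "$bdevio_pid"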
00:07:13.605 passed 00:07:13.605 Test: blockdev write read 8 blocks ...passed 00:07:13.605 Test: blockdev write read size > 128k ...passed 00:07:13.605 Test: blockdev write read invalid size ...passed 00:07:13.605 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.605 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.605 Test: blockdev write read max offset ...passed 00:07:13.605 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.605 Test: blockdev writev readv 8 blocks ...passed 00:07:13.605 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.605 Test: blockdev writev readv block ...passed 00:07:13.605 Test: blockdev writev readv size > 128k ...passed 00:07:13.605 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.605 Test: blockdev comparev and writev ...[2024-09-29 21:38:32.538588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6a06000 len:0x1000 00:07:13.606 [2024-09-29 21:38:32.538633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.606 passed 00:07:13.606 Test: blockdev nvme passthru rw ...passed 00:07:13.606 Test: blockdev nvme passthru vendor specific ...[2024-09-29 21:38:32.539296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.606 [2024-09-29 21:38:32.539330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.606 passed 00:07:13.606 Test: blockdev nvme admin passthru ...passed 00:07:13.606 Test: blockdev copy ...passed 00:07:13.606 Suite: bdevio tests on: Nvme2n3 00:07:13.606 Test: blockdev write read block ...passed 00:07:13.606 Test: blockdev write zeroes read block ...passed 00:07:13.606 Test: blockdev write zeroes read no split ...passed 00:07:13.606 Test: blockdev write zeroes read split ...passed 00:07:13.864 Test: blockdev write zeroes read split partial ...passed 00:07:13.864 Test: blockdev reset ...[2024-09-29 21:38:32.599098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:13.864 [2024-09-29 21:38:32.601963] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
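One reading aid for these notices: the parenthesized pair, e.g. COMPARE FAILURE (02/85), is the NVMe completion's status code type and status code in hex. SCT 02h is the media and data integrity error type and SC 85h is Compare Failure, which is what a miscompare returns; the suites still report passed, consistent with the failure path being exercised on purpose. A toy decoder for the handful of values appearing in this log:

    decode_nvme_status() {   # usage: decode_nvme_status 02 85
        case "$1/$2" in
            00/00) echo "generic / successful completion" ;;
            00/01) echo "generic / invalid command opcode" ;;
            02/85) echo "media and data integrity errors / compare failure" ;;
            *)     echo "sct=0x$1 sc=0x$2 (see the NVMe base spec status tables)" ;;
        esac
    }
    decode_nvme_status 02 85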
00:07:13.864 passed 00:07:13.864 Test: blockdev write read 8 blocks ...passed 00:07:13.864 Test: blockdev write read size > 128k ...passed 00:07:13.864 Test: blockdev write read invalid size ...passed 00:07:13.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.864 Test: blockdev write read max offset ...passed 00:07:13.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.864 Test: blockdev writev readv 8 blocks ...passed 00:07:13.864 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.864 Test: blockdev writev readv block ...passed 00:07:13.864 Test: blockdev writev readv size > 128k ...passed 00:07:13.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.864 Test: blockdev comparev and writev ...[2024-09-29 21:38:32.608919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfa3c000 len:0x1000 00:07:13.864 [2024-09-29 21:38:32.608968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.864 passed 00:07:13.864 Test: blockdev nvme passthru rw ...passed 00:07:13.864 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.864 Test: blockdev nvme admin passthru ...[2024-09-29 21:38:32.609757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.864 [2024-09-29 21:38:32.609785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.864 passed 00:07:13.864 Test: blockdev copy ...passed 00:07:13.864 Suite: bdevio tests on: Nvme2n2 00:07:13.864 Test: blockdev write read block ...passed 00:07:13.864 Test: blockdev write zeroes read block ...passed 00:07:13.864 Test: blockdev write zeroes read no split ...passed 00:07:13.864 Test: blockdev write zeroes read split ...passed 00:07:13.864 Test: blockdev write zeroes read split partial ...passed 00:07:13.864 Test: blockdev reset ...[2024-09-29 21:38:32.674298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:13.864 [2024-09-29 21:38:32.677465] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:13.864 passed
00:07:13.864 Test: blockdev write read 8 blocks ...passed
00:07:13.864 Test: blockdev write read size > 128k ...passed
00:07:13.864 Test: blockdev write read invalid size ...passed
00:07:13.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:13.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:13.864 Test: blockdev write read max offset ...passed
00:07:13.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:13.864 Test: blockdev writev readv 8 blocks ...passed
00:07:13.864 Test: blockdev writev readv 30 x 1block ...passed
00:07:13.864 Test: blockdev writev readv block ...passed
00:07:13.864 Test: blockdev writev readv size > 128k ...passed
00:07:13.864 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:13.864 Test: blockdev comparev and writev ...[2024-09-29 21:38:32.684772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfa36000 len:0x1000
00:07:13.864 [2024-09-29 21:38:32.684815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:13.864 passed
00:07:13.864 Test: blockdev nvme passthru rw ...passed
00:07:13.864 Test: blockdev nvme passthru vendor specific ...[2024-09-29 21:38:32.685544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:13.864 [2024-09-29 21:38:32.685571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:13.864 passed
00:07:13.864 Test: blockdev nvme admin passthru ...passed
00:07:13.864 Test: blockdev copy ...passed
00:07:13.864 Suite: bdevio tests on: Nvme2n1
00:07:13.864 Test: blockdev write read block ...passed
00:07:13.864 Test: blockdev write zeroes read block ...passed
00:07:13.864 Test: blockdev write zeroes read no split ...passed
00:07:13.864 Test: blockdev write zeroes read split ...passed
00:07:13.864 Test: blockdev write zeroes read split partial ...passed
00:07:13.864 Test: blockdev reset ...[2024-09-29 21:38:32.745925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:07:13.864 [2024-09-29 21:38:32.748674] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:13.864 passed 00:07:13.864 Test: blockdev write read 8 blocks ...passed 00:07:13.864 Test: blockdev write read size > 128k ...passed 00:07:13.864 Test: blockdev write read invalid size ...passed 00:07:13.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.864 Test: blockdev write read max offset ...passed 00:07:13.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.864 Test: blockdev writev readv 8 blocks ...passed 00:07:13.864 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.864 Test: blockdev writev readv block ...passed 00:07:13.864 Test: blockdev writev readv size > 128k ...passed 00:07:13.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.864 Test: blockdev comparev and writev ...[2024-09-29 21:38:32.755565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfa32000 len:0x1000 00:07:13.864 [2024-09-29 21:38:32.755608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.864 passed 00:07:13.864 Test: blockdev nvme passthru rw ...passed 00:07:13.864 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.864 Test: blockdev nvme admin passthru ...[2024-09-29 21:38:32.756223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.864 [2024-09-29 21:38:32.756247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.864 passed 00:07:13.864 Test: blockdev copy ...passed 00:07:13.864 Suite: bdevio tests on: Nvme1n1p2 00:07:13.864 Test: blockdev write read block ...passed 00:07:13.864 Test: blockdev write zeroes read block ...passed 00:07:13.864 Test: blockdev write zeroes read no split ...passed 00:07:13.864 Test: blockdev write zeroes read split ...passed 00:07:13.864 Test: blockdev write zeroes read split partial ...passed 00:07:13.864 Test: blockdev reset ...[2024-09-29 21:38:32.818860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:13.864 [2024-09-29 21:38:32.821364] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:13.864 passed
00:07:13.864 Test: blockdev write read 8 blocks ...passed
00:07:13.864 Test: blockdev write read size > 128k ...passed
00:07:13.864 Test: blockdev write read invalid size ...passed
00:07:13.865 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:13.865 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:13.865 Test: blockdev write read max offset ...passed
00:07:13.865 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:13.865 Test: blockdev writev readv 8 blocks ...passed
00:07:13.865 Test: blockdev writev readv 30 x 1block ...passed
00:07:13.865 Test: blockdev writev readv block ...passed
00:07:13.865 Test: blockdev writev readv size > 128k ...passed
00:07:13.865 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:13.865 Test: blockdev comparev and writev ...[2024-09-29 21:38:32.829920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2bfa2e000 len:0x1000
00:07:13.865 [2024-09-29 21:38:32.830051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:13.865 passed
00:07:13.865 Test: blockdev nvme passthru rw ...passed
00:07:13.865 Test: blockdev nvme passthru vendor specific ...passed
00:07:13.865 Test: blockdev nvme admin passthru ...passed
00:07:13.865 Test: blockdev copy ...passed
00:07:13.865 Suite: bdevio tests on: Nvme1n1p1
00:07:13.865 Test: blockdev write read block ...passed
00:07:13.865 Test: blockdev write zeroes read block ...passed
00:07:13.865 Test: blockdev write zeroes read no split ...passed
00:07:14.121 Test: blockdev write zeroes read split ...passed
00:07:14.121 Test: blockdev write zeroes read split partial ...passed
00:07:14.121 Test: blockdev reset ...[2024-09-29 21:38:32.877934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller
00:07:14.121 [2024-09-29 21:38:32.880498] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:14.121 passed 00:07:14.121 Test: blockdev write read 8 blocks ...passed 00:07:14.121 Test: blockdev write read size > 128k ...passed 00:07:14.121 Test: blockdev write read invalid size ...passed 00:07:14.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.121 Test: blockdev write read max offset ...passed 00:07:14.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.121 Test: blockdev writev readv 8 blocks ...passed 00:07:14.121 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.121 Test: blockdev writev readv block ...passed 00:07:14.121 Test: blockdev writev readv size > 128k ...passed 00:07:14.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.121 Test: blockdev comparev and writev ...[2024-09-29 21:38:32.888073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x292c0e000 len:0x1000 00:07:14.121 [2024-09-29 21:38:32.888216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:14.121 passed 00:07:14.121 Test: blockdev nvme passthru rw ...passed 00:07:14.121 Test: blockdev nvme passthru vendor specific ...passed 00:07:14.121 Test: blockdev nvme admin passthru ...passed 00:07:14.121 Test: blockdev copy ...passed 00:07:14.121 Suite: bdevio tests on: Nvme0n1 00:07:14.121 Test: blockdev write read block ...passed 00:07:14.121 Test: blockdev write zeroes read block ...passed 00:07:14.121 Test: blockdev write zeroes read no split ...passed 00:07:14.121 Test: blockdev write zeroes read split ...passed 00:07:14.121 Test: blockdev write zeroes read split partial ...passed 00:07:14.121 Test: blockdev reset ...[2024-09-29 21:38:32.935227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:14.121 [2024-09-29 21:38:32.937763] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:14.121 passed 00:07:14.121 Test: blockdev write read 8 blocks ...passed 00:07:14.121 Test: blockdev write read size > 128k ...passed 00:07:14.121 Test: blockdev write read invalid size ...passed 00:07:14.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:14.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:14.121 Test: blockdev write read max offset ...passed 00:07:14.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:14.121 Test: blockdev writev readv 8 blocks ...passed 00:07:14.121 Test: blockdev writev readv 30 x 1block ...passed 00:07:14.121 Test: blockdev writev readv block ...passed 00:07:14.121 Test: blockdev writev readv size > 128k ...passed 00:07:14.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:14.121 Test: blockdev comparev and writev ...[2024-09-29 21:38:32.944240] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:14.121 separate metadata which is not supported yet. 
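The skip recorded just above is driven by the bdev's metadata layout: the Nvme0n1 dump earlier in this log shows "md_size": 64 with "md_interleave": false, i.e. a separate-metadata format, which bdevio's comparev_and_writev step does not support yet. A hedged one-liner to list which attached bdevs would hit the same skip, assuming the target is still up and jq is installed:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'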
00:07:14.121 passed 00:07:14.121 Test: blockdev nvme passthru rw ...passed 00:07:14.121 Test: blockdev nvme passthru vendor specific ...[2024-09-29 21:38:32.944780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:14.121 [2024-09-29 21:38:32.944901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:14.121 passed 00:07:14.121 Test: blockdev nvme admin passthru ...passed 00:07:14.121 Test: blockdev copy ...passed 00:07:14.121 00:07:14.121 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.121 suites 7 7 n/a 0 0 00:07:14.121 tests 161 161 161 0 0 00:07:14.121 asserts 1025 1025 1025 0 n/a 00:07:14.121 00:07:14.121 Elapsed time = 1.233 seconds 00:07:14.121 0 00:07:14.121 21:38:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61786 00:07:14.121 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61786 ']' 00:07:14.121 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61786 00:07:14.121 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:14.121 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.121 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61786 00:07:14.121 killing process with pid 61786 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.121 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.121 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61786' 00:07:14.121 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61786 00:07:14.121 21:38:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61786
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:14.942 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.942 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:14.942 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.942 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:14.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:14.942 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:14.942 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:14.942 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:14.942 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:14.942 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:14.942 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61841 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61841 /var/tmp/spdk-nbd.sock 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61841 ']' 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.943 21:38:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:14.943 [2024-09-29 21:38:33.785721] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
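Everything from here on runs against a dedicated bdev_svc app: the test launches it with -r /var/tmp/spdk-nbd.sock and sits in waitforlisten until the RPC server answers, which is what the "Waiting for process to start up..." message above reflects. The wait amounts to polling the UNIX socket; a minimal stand-in for it, assuming only that rpc.py and the standard rpc_get_methods RPC are available:

sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
  # Ready once the socket exists and the app answers a trivial RPC.
  if [ -S "$sock" ] && "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
    break
  fi
  sleep 0.1
done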
00:07:14.943 [2024-09-29 21:38:33.785877] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.200 [2024-09-29 21:38:33.947637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.200 [2024-09-29 21:38:34.130662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:15.765 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.022 1+0 records in 00:07:16.022 1+0 records out 00:07:16.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532818 s, 7.7 MB/s 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.022 21:38:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.280 1+0 records in 00:07:16.280 1+0 records out 00:07:16.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354625 s, 11.6 MB/s 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.280 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.538 1+0 records in 00:07:16.538 1+0 records out 00:07:16.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128963 s, 3.2 MB/s 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.538 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.796 1+0 records in 00:07:16.796 1+0 records out 00:07:16.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00155219 s, 2.6 MB/s 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:16.796 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.053 1+0 records in 00:07:17.053 1+0 records out 00:07:17.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397612 s, 10.3 MB/s 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.053 21:38:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.311 1+0 records in 00:07:17.311 1+0 records out 00:07:17.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460406 s, 8.9 MB/s 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.311 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.569 1+0 records in 00:07:17.569 1+0 records out 00:07:17.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674674 s, 6.1 MB/s 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:17.569 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.826 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd0", 00:07:17.826 "bdev_name": "Nvme0n1" 00:07:17.826 }, 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd1", 00:07:17.826 "bdev_name": "Nvme1n1p1" 00:07:17.826 }, 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd2", 00:07:17.826 "bdev_name": "Nvme1n1p2" 00:07:17.826 }, 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd3", 00:07:17.826 "bdev_name": "Nvme2n1" 00:07:17.826 }, 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd4", 00:07:17.826 "bdev_name": "Nvme2n2" 00:07:17.826 }, 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd5", 00:07:17.826 "bdev_name": "Nvme2n3" 00:07:17.826 }, 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd6", 00:07:17.826 "bdev_name": "Nvme3n1" 00:07:17.826 } 00:07:17.826 ]' 00:07:17.826 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:17.826 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:17.826 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd0", 00:07:17.826 "bdev_name": "Nvme0n1" 00:07:17.826 }, 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd1", 00:07:17.826 "bdev_name": "Nvme1n1p1" 00:07:17.826 }, 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd2", 00:07:17.826 "bdev_name": "Nvme1n1p2" 00:07:17.826 }, 00:07:17.826 { 00:07:17.826 "nbd_device": "/dev/nbd3", 00:07:17.827 "bdev_name": "Nvme2n1" 00:07:17.827 }, 00:07:17.827 { 00:07:17.827 "nbd_device": "/dev/nbd4", 00:07:17.827 "bdev_name": "Nvme2n2" 00:07:17.827 }, 00:07:17.827 { 00:07:17.827 "nbd_device": "/dev/nbd5", 00:07:17.827 "bdev_name": "Nvme2n3" 00:07:17.827 }, 00:07:17.827 { 00:07:17.827 "nbd_device": "/dev/nbd6", 00:07:17.827 "bdev_name": "Nvme3n1" 00:07:17.827 } 00:07:17.827 ]' 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.827 21:38:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.088 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.088 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.088 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.088 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.088 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.088 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.088 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.088 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.088 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.088 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:18.353 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:18.353 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:18.353 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:18.353 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.353 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.353 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:18.353 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.353 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.353 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.353 21:38:37 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:18.610 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:18.610 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:18.610 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:18.610 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.610 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.610 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:18.610 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.610 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.610 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.610 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.867 21:38:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.124 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:19.382 
21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.382 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:19.640 /dev/nbd0 00:07:19.640 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.640 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.640 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:19.640 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:19.640 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.640 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.640 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:19.640 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:19.640 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.640 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.641 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.641 1+0 records in 00:07:19.641 1+0 records out 00:07:19.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372715 s, 11.0 MB/s 00:07:19.641 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.641 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:19.641 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.641 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.641 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:19.641 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.641 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.641 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:19.899 /dev/nbd1 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.899 21:38:38 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.899 1+0 records in 00:07:19.899 1+0 records out 00:07:19.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609665 s, 6.7 MB/s 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:19.899 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:20.158 /dev/nbd10 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.158 1+0 records in 00:07:20.158 1+0 records out 00:07:20.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383865 s, 10.7 MB/s 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.158 21:38:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:20.416 /dev/nbd11 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.416 1+0 records in 00:07:20.416 1+0 records out 00:07:20.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384424 s, 10.7 MB/s 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.416 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:20.674 /dev/nbd12 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 
/proc/partitions 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.674 1+0 records in 00:07:20.674 1+0 records out 00:07:20.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380043 s, 10.8 MB/s 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.674 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:20.674 /dev/nbd13 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.933 1+0 records in 00:07:20.933 1+0 records out 00:07:20.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035154 s, 11.7 MB/s 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.933 21:38:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:20.933 /dev/nbd14 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.933 1+0 records in 00:07:20.933 1+0 records out 00:07:20.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011593 s, 3.5 MB/s 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.933 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.192 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.192 21:38:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:21.192 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:21.192 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:21.192 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:21.192 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.192 21:38:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd0", 00:07:21.192 "bdev_name": "Nvme0n1" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd1", 00:07:21.192 "bdev_name": "Nvme1n1p1" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd10", 00:07:21.192 "bdev_name": "Nvme1n1p2" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd11", 00:07:21.192 "bdev_name": "Nvme2n1" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd12", 00:07:21.192 "bdev_name": "Nvme2n2" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd13", 
00:07:21.192 "bdev_name": "Nvme2n3" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd14", 00:07:21.192 "bdev_name": "Nvme3n1" 00:07:21.192 } 00:07:21.192 ]' 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd0", 00:07:21.192 "bdev_name": "Nvme0n1" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd1", 00:07:21.192 "bdev_name": "Nvme1n1p1" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd10", 00:07:21.192 "bdev_name": "Nvme1n1p2" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd11", 00:07:21.192 "bdev_name": "Nvme2n1" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd12", 00:07:21.192 "bdev_name": "Nvme2n2" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd13", 00:07:21.192 "bdev_name": "Nvme2n3" 00:07:21.192 }, 00:07:21.192 { 00:07:21.192 "nbd_device": "/dev/nbd14", 00:07:21.192 "bdev_name": "Nvme3n1" 00:07:21.192 } 00:07:21.192 ]' 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:21.192 /dev/nbd1 00:07:21.192 /dev/nbd10 00:07:21.192 /dev/nbd11 00:07:21.192 /dev/nbd12 00:07:21.192 /dev/nbd13 00:07:21.192 /dev/nbd14' 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:21.192 /dev/nbd1 00:07:21.192 /dev/nbd10 00:07:21.192 /dev/nbd11 00:07:21.192 /dev/nbd12 00:07:21.192 /dev/nbd13 00:07:21.192 /dev/nbd14' 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:21.192 256+0 records in 00:07:21.192 256+0 records out 00:07:21.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00691281 s, 152 MB/s 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.192 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:21.450 256+0 records in 00:07:21.450 256+0 records out 00:07:21.450 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.160835 s, 6.5 MB/s 00:07:21.450 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.450 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:21.708 256+0 records in 00:07:21.708 256+0 records out 00:07:21.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.219648 s, 4.8 MB/s 00:07:21.708 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.708 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:21.966 256+0 records in 00:07:21.966 256+0 records out 00:07:21.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.262057 s, 4.0 MB/s 00:07:21.966 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:21.966 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:22.224 256+0 records in 00:07:22.224 256+0 records out 00:07:22.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168951 s, 6.2 MB/s 00:07:22.224 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.224 21:38:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:22.224 256+0 records in 00:07:22.224 256+0 records out 00:07:22.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134586 s, 7.8 MB/s 00:07:22.224 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.224 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:22.482 256+0 records in 00:07:22.482 256+0 records out 00:07:22.482 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0726773 s, 14.4 MB/s 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:22.483 256+0 records in 00:07:22.483 256+0 records out 00:07:22.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0705353 s, 14.9 MB/s 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.483 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.741 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.741 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.741 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.741 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.741 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.741 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.741 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.741 21:38:41 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.741 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.741 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:23.000 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:23.001 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:23.001 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.001 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.001 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:23.001 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.001 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.001 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.001 21:38:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:23.261 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:23.261 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:23.261 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:23.261 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.261 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.261 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:23.261 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.261 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.261 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.261 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:23.519 21:38:42 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:23.519 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:23.519 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:23.519 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.519 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.519 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:23.519 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.519 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.519 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.519 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:23.778 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:23.778 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:23.778 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:23.778 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.778 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.778 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:23.778 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.778 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.778 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.778 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.036 21:38:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.296 21:38:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:24.296 malloc_lvol_verify 00:07:24.296 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:24.557 2697ea17-7056-43cf-bb51-e16c2508f746 00:07:24.557 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:24.818 f20275dd-deaa-4aa2-b362-34dd104672be 00:07:24.818 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:25.079 /dev/nbd0 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:25.079 mke2fs 1.47.0 (5-Feb-2023) 00:07:25.079 Discarding device blocks: 0/4096 done 00:07:25.079 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:25.079 00:07:25.079 Allocating group tables: 0/1 done 00:07:25.079 Writing inode tables: 0/1 done 00:07:25.079 Creating journal (1024 blocks): done 00:07:25.079 Writing superblocks and filesystem accounting information: 0/1 done 00:07:25.079 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:25.079 21:38:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.079 21:38:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.079 21:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.079 21:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.079 21:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.079 21:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.079 21:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.079 21:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61841 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61841 ']' 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61841 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61841 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61841' 00:07:25.340 killing process with pid 61841 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61841 00:07:25.340 21:38:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61841 00:07:26.275 ************************************ 00:07:26.275 END TEST bdev_nbd 00:07:26.275 ************************************ 00:07:26.275 21:38:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:26.275 00:07:26.275 real 0m11.318s 00:07:26.275 user 0m15.574s 00:07:26.275 sys 0m3.689s 00:07:26.275 21:38:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.275 21:38:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:26.275 21:38:45 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:26.275 21:38:45 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:26.275 21:38:45 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:26.275 skipping fio tests on NVMe due to multi-ns failures. 00:07:26.275 21:38:45 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
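The bdev_nbd test that just wrapped up exercises every exported /dev/nbd* node with a raw write/read-back pass: 1 MiB of /dev/urandom data is written to each device with O_DIRECT, compared back with cmp, and an ext4 filesystem is then created on an lvol-backed nbd as a final sanity check. A minimal standalone sketch of the write/verify core (the device list and temp path are illustrative; the suite tracks them itself via nbd_start_disks and test/bdev/nbdrandtest):

    #!/usr/bin/env bash
    # Mirror of nbd_dd_data_verify: write random data to each nbd device, then byte-compare it back.
    set -euo pipefail
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10)      # assumed devices
    tmp_file=$(mktemp)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # write it out, bypassing the page cache
    done
    for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"                              # any mismatch exits non-zero and fails the test
    done
    rm -f "$tmp_file"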
00:07:26.275 21:38:45 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:26.275 21:38:45 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:26.275 21:38:45 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:26.275 21:38:45 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.275 21:38:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.275 ************************************ 00:07:26.275 START TEST bdev_verify 00:07:26.275 ************************************ 00:07:26.275 21:38:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:26.275 [2024-09-29 21:38:45.128894] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:26.275 [2024-09-29 21:38:45.129018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62258 ] 00:07:26.534 [2024-09-29 21:38:45.278765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.534 [2024-09-29 21:38:45.486548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.534 [2024-09-29 21:38:45.486644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.467 Running I/O for 5 seconds... 
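The command logged just above launches the verify pass; the same bdevperf binary drives the big-I/O and write-zeroes passes further down with different -o/-w/-t values against the same bdev.json. A sketch of the invocation, annotating only the flags I am confident of (-C and the empty trailing argument are reproduced from the logged command as-is; check bdevperf --help for their exact meaning):

    # assumed to run from the spdk repo root; flag meanings per bdevperf usage:
    #   --json  bdev subsystem config (declares the Nvme*n* controllers and GPT split bdevs)
    #   -q 128  queue depth per job
    #   -o 4096 I/O size in bytes (the big-I/O pass uses 65536)
    #   -w verify  write, read back, and compare data inline
    #   -t 5    run time in seconds
    #   -m 0x3  core mask (two reactors)
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''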
00:07:32.599 22592.00 IOPS, 88.25 MiB/s 22528.00 IOPS, 88.00 MiB/s 22528.00 IOPS, 88.00 MiB/s 21392.00 IOPS, 83.56 MiB/s 21030.40 IOPS, 82.15 MiB/s
00:07:32.599 Latency(us)
00:07:32.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:32.599 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x0 length 0xbd0bd
00:07:32.599 Nvme0n1 : 5.07 1490.90 5.82 0.00 0.00 85606.23 17543.48 76223.41
00:07:32.599 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:32.599 Nvme0n1 : 5.06 1467.54 5.73 0.00 0.00 86934.83 15325.34 83482.78
00:07:32.599 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x0 length 0x4ff80
00:07:32.599 Nvme1n1p1 : 5.07 1490.43 5.82 0.00 0.00 85521.07 19459.15 74206.92
00:07:32.599 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:32.599 Nvme1n1p1 : 5.06 1467.04 5.73 0.00 0.00 86789.28 18854.20 73803.62
00:07:32.599 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x0 length 0x4ff7f
00:07:32.599 Nvme1n1p2 : 5.07 1489.93 5.82 0.00 0.00 85381.04 20870.70 70980.53
00:07:32.599 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:32.599 Nvme1n1p2 : 5.06 1466.58 5.73 0.00 0.00 86676.55 20265.75 71787.13
00:07:32.599 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x0 length 0x80000
00:07:32.599 Nvme2n1 : 5.07 1489.51 5.82 0.00 0.00 85272.08 22282.24 66947.54
00:07:32.599 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x80000 length 0x80000
00:07:32.599 Nvme2n1 : 5.06 1466.17 5.73 0.00 0.00 86491.24 21576.47 70577.23
00:07:32.599 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x0 length 0x80000
00:07:32.599 Nvme2n2 : 5.07 1489.10 5.82 0.00 0.00 85129.29 21979.77 67754.14
00:07:32.599 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x80000 length 0x80000
00:07:32.599 Nvme2n2 : 5.08 1474.61 5.76 0.00 0.00 85795.14 4411.08 70577.23
00:07:32.599 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x0 length 0x80000
00:07:32.599 Nvme2n3 : 5.07 1488.66 5.82 0.00 0.00 84996.03 17543.48 70980.53
00:07:32.599 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x80000 length 0x80000
00:07:32.599 Nvme2n3 : 5.09 1482.68 5.79 0.00 0.00 85201.46 11695.66 74610.22
00:07:32.599 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x0 length 0x20000
00:07:32.599 Nvme3n1 : 5.08 1498.27 5.85 0.00 0.00 84325.80 3075.15 74610.22
00:07:32.599 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:32.599 Verification LBA range: start 0x20000 length 0x20000
00:07:32.599 Nvme3n1 : 5.09 1482.24 5.79 0.00 0.00 85102.56 10384.94 78643.20
=================================================================================================================== 00:07:32.599 Total : 20743.64 81.03 0.00 0.00 85652.69 3075.15 83482.78 00:07:33.980 00:07:33.980 real 0m7.467s 00:07:33.980 user 0m13.769s 00:07:33.980 sys 0m0.251s 00:07:33.980 21:38:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.980 21:38:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:33.980 ************************************ 00:07:33.980 END TEST bdev_verify 00:07:33.980 ************************************ 00:07:33.980 21:38:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:33.980 21:38:52 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:33.980 21:38:52 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.980 21:38:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.980 ************************************ 00:07:33.980 START TEST bdev_verify_big_io 00:07:33.980 ************************************ 00:07:33.980 21:38:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:33.980 [2024-09-29 21:38:52.641621] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:33.980 [2024-09-29 21:38:52.641738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62352 ] 00:07:33.980 [2024-09-29 21:38:52.791869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:34.240 [2024-09-29 21:38:52.977138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.240 [2024-09-29 21:38:52.977218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.807 Running I/O for 5 seconds... 
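Reading the Latency(us) tables on either side of this point: each bdev gets one row per reactor core, and the columns after the colon are runtime in seconds, IOPS, MiB/s, failed I/Os per second, timeouts per second, then average/min/max latency in microseconds. A throwaway awk sketch for pulling device-name/average-latency pairs out of a saved bdevperf stdout (field positions assume bdevperf's plain table layout without the CI timestamp prefixes; the file name is hypothetical):

    # data rows look like: Nvme2n1 : 6.04 75.38 4.71 0.00 0.00 1556673.56 157286.40 1935832.62
    awk '$2 == ":" && NF == 10 { print $1, $8 }' bdevperf.out   # $8 = average latency (us); Total row has NF == 9 and is skipped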
00:07:41.071 2030.00 IOPS, 126.88 MiB/s 3865.50 IOPS, 241.59 MiB/s
00:07:41.071 Latency(us)
00:07:41.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:41.071 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x0 length 0xbd0b
00:07:41.071 Nvme0n1 : 5.80 105.69 6.61 0.00 0.00 1148003.54 16736.89 1367988.38
00:07:41.071 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:41.071 Nvme0n1 : 6.20 90.39 5.65 0.00 0.00 1143457.18 35288.62 2116510.33
00:07:41.071 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x0 length 0x4ff8
00:07:41.071 Nvme1n1p1 : 5.80 110.27 6.89 0.00 0.00 1079053.08 99211.42 1161499.57
00:07:41.071 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:41.071 Nvme1n1p1 : 6.35 148.40 9.27 0.00 0.00 671924.56 686.87 2942465.58
00:07:41.071 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x0 length 0x4ff7
00:07:41.071 Nvme1n1p2 : 5.81 110.23 6.89 0.00 0.00 1039105.42 144380.85 1038896.84
00:07:41.071 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:41.071 Nvme1n1p2 : 6.12 79.89 4.99 0.00 0.00 1526183.03 13208.02 1497043.89
00:07:41.071 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x0 length 0x8000
00:07:41.071 Nvme2n1 : 5.98 117.30 7.33 0.00 0.00 947143.56 72997.02 1058255.16
00:07:41.071 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x8000 length 0x8000
00:07:41.071 Nvme2n1 : 6.04 75.38 4.71 0.00 0.00 1556673.56 157286.40 1935832.62
00:07:41.071 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x0 length 0x8000
00:07:41.071 Nvme2n2 : 6.11 124.83 7.80 0.00 0.00 860570.20 72593.72 1058255.16
00:07:41.071 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x8000 length 0x8000
00:07:41.071 Nvme2n2 : 6.12 75.27 4.70 0.00 0.00 1534782.04 85902.57 2348810.24
00:07:41.071 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x0 length 0x8000
00:07:41.071 Nvme2n3 : 6.16 135.06 8.44 0.00 0.00 774092.13 23693.78 1064707.94
00:07:41.071 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x8000 length 0x8000
00:07:41.071 Nvme2n3 : 6.13 79.17 4.95 0.00 0.00 1406315.04 85902.57 2387526.89
00:07:41.071 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x0 length 0x2000
00:07:41.071 Nvme3n1 : 6.22 154.35 9.65 0.00 0.00 658383.44 570.29 1084066.26
00:07:41.071 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:41.071 Verification LBA range: start 0x2000 length 0x2000
00:07:41.071 Nvme3n1 : 6.19 80.73 5.05 0.00 0.00 1339250.34 29844.09 2439149.10
00:07:41.071 ===================================================================================================================
00:07:41.071 Total : 1486.94 92.93 0.00 0.00 1044423.70 570.29 2942465.58 00:07:42.980 00:07:42.980 real 0m9.014s 00:07:42.980 user 0m16.885s 00:07:42.980 sys 0m0.240s 00:07:42.980 21:39:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.980 ************************************ 00:07:42.980 END TEST bdev_verify_big_io 00:07:42.980 ************************************ 00:07:42.980 21:39:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:42.980 21:39:01 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.980 21:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:42.980 21:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.980 21:39:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.980 ************************************ 00:07:42.980 START TEST bdev_write_zeroes 00:07:42.980 ************************************ 00:07:42.980 21:39:01 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.980 [2024-09-29 21:39:01.709432] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:42.980 [2024-09-29 21:39:01.709548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62472 ] 00:07:42.980 [2024-09-29 21:39:01.859752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.240 [2024-09-29 21:39:02.050836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.809 Running I/O for 1 seconds... 
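The bdev_write_zeroes pass just launched needs only one core and one second because -w write_zeroes issues zero-fill commands instead of carrying data buffers. Whether a bdev supports the operation is advertised in its supported_io_types map (visible as "write_zeroes": true in the GPT bdev dump further down). A sketch of checking that over RPC (paths assumed relative to the repo root, default rpc socket):

    # list every bdev that advertises write_zeroes support
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.supported_io_types.write_zeroes) | .name'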
00:07:44.749 54480.00 IOPS, 212.81 MiB/s
00:07:44.749 Latency(us)
00:07:44.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:44.749 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:44.749 Nvme0n1 : 1.03 7630.93 29.81 0.00 0.00 16734.00 6427.57 184710.70
00:07:44.749 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:44.749 Nvme1n1p1 : 1.03 7792.75 30.44 0.00 0.00 16361.96 11141.12 122602.73
00:07:44.749 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:44.749 Nvme1n1p2 : 1.03 7783.18 30.40 0.00 0.00 16267.78 9628.75 123409.33
00:07:44.749 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:44.749 Nvme2n1 : 1.03 7774.34 30.37 0.00 0.00 16247.69 8570.09 122602.73
00:07:44.749 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:44.749 Nvme2n2 : 1.03 7827.58 30.58 0.00 0.00 16097.26 7662.67 122602.73
00:07:44.749 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:44.749 Nvme2n3 : 1.03 7756.68 30.30 0.00 0.00 16210.91 6856.07 158899.59
00:07:44.749 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:44.749 Nvme3n1 : 1.03 7685.85 30.02 0.00 0.00 16332.05 10687.41 159706.19
00:07:44.749 ===================================================================================================================
00:07:44.749 Total : 54251.32 211.92 0.00 0.00 16320.10 6427.57 184710.70
00:07:45.712
00:07:45.712 real 0m2.872s
00:07:45.712 user 0m2.564s
00:07:45.712 sys 0m0.191s
00:07:45.712 21:39:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:45.712 ************************************
00:07:45.712 END TEST bdev_write_zeroes
00:07:45.712 ************************************
00:07:45.712 21:39:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:45.712 21:39:04 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:45.712 21:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:45.712 21:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:45.712 21:39:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:45.712 ************************************
00:07:45.712 START TEST bdev_json_nonenclosed
00:07:45.712 ************************************
00:07:45.712 21:39:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:45.712 [2024-09-29 21:39:04.649611] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
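bdev_json_nonenclosed is a negative test: bdevperf is handed a config whose top level is not wrapped in an object, and the expected outcome is the clean "not enclosed in {}" startup error captured below rather than a crash or hang. The real file is test/bdev/nonenclosed.json; a sketch of the shape that trips the check (contents assumed, not copied from the suite):

    # a bare top-level key with no enclosing braces is rejected by json_config
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF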
00:07:45.712 [2024-09-29 21:39:04.649734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62528 ] 00:07:45.973 [2024-09-29 21:39:04.800310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.234 [2024-09-29 21:39:05.046423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.234 [2024-09-29 21:39:05.046532] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:46.234 [2024-09-29 21:39:05.046552] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:46.234 [2024-09-29 21:39:05.046562] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.496 00:07:46.496 real 0m0.793s 00:07:46.496 user 0m0.560s 00:07:46.496 sys 0m0.124s 00:07:46.496 21:39:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.496 21:39:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:46.496 ************************************ 00:07:46.496 END TEST bdev_json_nonenclosed 00:07:46.496 ************************************ 00:07:46.496 21:39:05 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.496 21:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:46.496 21:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.496 21:39:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.496 ************************************ 00:07:46.496 START TEST bdev_json_nonarray 00:07:46.496 ************************************ 00:07:46.496 21:39:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.756 [2024-09-29 21:39:05.508804] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:46.757 [2024-09-29 21:39:05.508954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62559 ] 00:07:46.757 [2024-09-29 21:39:05.662585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.018 [2024-09-29 21:39:05.893160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.018 [2024-09-29 21:39:05.893280] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
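Its sibling bdev_json_nonarray exercises the next validation step: the config is a well-formed object, but "subsystems" maps to something other than an array, which trips the "'subsystems' should be an array" error above. Both tests then finish with spdk_app_stop'd on non-zero, which is the pass condition. A sketch of that shape (again assumed; the suite's copy lives in test/bdev/nonarray.json):

    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF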
00:07:47.018 [2024-09-29 21:39:05.893301] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:47.018 [2024-09-29 21:39:05.893317] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.280 00:07:47.280 real 0m0.777s 00:07:47.280 user 0m0.537s 00:07:47.280 sys 0m0.130s 00:07:47.280 21:39:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.280 ************************************ 00:07:47.280 END TEST bdev_json_nonarray 00:07:47.280 21:39:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:47.280 ************************************ 00:07:47.541 21:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:47.541 21:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:47.541 21:39:06 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:47.541 21:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.541 21:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.541 21:39:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.541 ************************************ 00:07:47.541 START TEST bdev_gpt_uuid 00:07:47.541 ************************************ 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62585 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62585 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:47.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62585 ']' 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.541 21:39:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.541 [2024-09-29 21:39:06.377407] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
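The bdev_gpt_uuid test that spdk_tgt is starting here loads the same bdev.json over RPC and then asserts that each GPT partition bdev carries the expected partition GUID both as its alias and in its gpt driver data. The core of the check, condensed into a standalone sketch (the UUID is the suite's SPDK_TEST_first partition; the socket path is spdk_tgt's default):

    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b "$uuid")
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]                                  # alias matches
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]   # GUID matches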
00:07:47.541 [2024-09-29 21:39:06.377563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62585 ] 00:07:47.802 [2024-09-29 21:39:06.524887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.802 [2024-09-29 21:39:06.762743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.745 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.745 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:48.745 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:48.745 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.745 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:49.007 Some configs were skipped because the RPC state that can call them passed over. 00:07:49.007 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.007 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:49.007 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.007 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:49.007 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:49.008 { 00:07:49.008 "name": "Nvme1n1p1", 00:07:49.008 "aliases": [ 00:07:49.008 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:49.008 ], 00:07:49.008 "product_name": "GPT Disk", 00:07:49.008 "block_size": 4096, 00:07:49.008 "num_blocks": 655104, 00:07:49.008 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:49.008 "assigned_rate_limits": { 00:07:49.008 "rw_ios_per_sec": 0, 00:07:49.008 "rw_mbytes_per_sec": 0, 00:07:49.008 "r_mbytes_per_sec": 0, 00:07:49.008 "w_mbytes_per_sec": 0 00:07:49.008 }, 00:07:49.008 "claimed": false, 00:07:49.008 "zoned": false, 00:07:49.008 "supported_io_types": { 00:07:49.008 "read": true, 00:07:49.008 "write": true, 00:07:49.008 "unmap": true, 00:07:49.008 "flush": true, 00:07:49.008 "reset": true, 00:07:49.008 "nvme_admin": false, 00:07:49.008 "nvme_io": false, 00:07:49.008 "nvme_io_md": false, 00:07:49.008 "write_zeroes": true, 00:07:49.008 "zcopy": false, 00:07:49.008 "get_zone_info": false, 00:07:49.008 "zone_management": false, 00:07:49.008 "zone_append": false, 00:07:49.008 "compare": true, 00:07:49.008 "compare_and_write": false, 00:07:49.008 "abort": true, 00:07:49.008 "seek_hole": false, 00:07:49.008 "seek_data": false, 00:07:49.008 "copy": true, 00:07:49.008 "nvme_iov_md": false 00:07:49.008 }, 00:07:49.008 "driver_specific": { 
00:07:49.008 "gpt": { 00:07:49.008 "base_bdev": "Nvme1n1", 00:07:49.008 "offset_blocks": 256, 00:07:49.008 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:49.008 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:49.008 "partition_name": "SPDK_TEST_first" 00:07:49.008 } 00:07:49.008 } 00:07:49.008 } 00:07:49.008 ]' 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:49.008 { 00:07:49.008 "name": "Nvme1n1p2", 00:07:49.008 "aliases": [ 00:07:49.008 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:49.008 ], 00:07:49.008 "product_name": "GPT Disk", 00:07:49.008 "block_size": 4096, 00:07:49.008 "num_blocks": 655103, 00:07:49.008 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:49.008 "assigned_rate_limits": { 00:07:49.008 "rw_ios_per_sec": 0, 00:07:49.008 "rw_mbytes_per_sec": 0, 00:07:49.008 "r_mbytes_per_sec": 0, 00:07:49.008 "w_mbytes_per_sec": 0 00:07:49.008 }, 00:07:49.008 "claimed": false, 00:07:49.008 "zoned": false, 00:07:49.008 "supported_io_types": { 00:07:49.008 "read": true, 00:07:49.008 "write": true, 00:07:49.008 "unmap": true, 00:07:49.008 "flush": true, 00:07:49.008 "reset": true, 00:07:49.008 "nvme_admin": false, 00:07:49.008 "nvme_io": false, 00:07:49.008 "nvme_io_md": false, 00:07:49.008 "write_zeroes": true, 00:07:49.008 "zcopy": false, 00:07:49.008 "get_zone_info": false, 00:07:49.008 "zone_management": false, 00:07:49.008 "zone_append": false, 00:07:49.008 "compare": true, 00:07:49.008 "compare_and_write": false, 00:07:49.008 "abort": true, 00:07:49.008 "seek_hole": false, 00:07:49.008 "seek_data": false, 00:07:49.008 "copy": true, 00:07:49.008 "nvme_iov_md": false 00:07:49.008 }, 00:07:49.008 "driver_specific": { 00:07:49.008 "gpt": { 00:07:49.008 "base_bdev": "Nvme1n1", 00:07:49.008 "offset_blocks": 655360, 00:07:49.008 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:49.008 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:49.008 "partition_name": "SPDK_TEST_second" 00:07:49.008 } 00:07:49.008 } 00:07:49.008 } 00:07:49.008 ]' 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:49.008 21:39:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62585 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62585 ']' 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62585 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62585 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.270 killing process with pid 62585 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62585' 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62585 00:07:49.270 21:39:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62585 00:07:51.189 00:07:51.189 real 0m3.465s 00:07:51.189 user 0m3.497s 00:07:51.189 sys 0m0.522s 00:07:51.189 21:39:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.189 ************************************ 00:07:51.189 END TEST bdev_gpt_uuid 00:07:51.189 ************************************ 00:07:51.189 21:39:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:51.189 21:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:51.189 21:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:51.189 21:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:51.189 21:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:51.189 21:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:51.189 21:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:51.189 21:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:51.189 21:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:51.189 21:39:09 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:51.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:51.450 Waiting for block devices as requested 00:07:51.450 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:51.450 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:51.450 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:51.712 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:57.005 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:57.005 21:39:15 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:57.005 21:39:15 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:57.005 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:57.005 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:57.005 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:57.005 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:57.005 21:39:15 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:57.005 00:07:57.005 real 0m57.675s 00:07:57.005 user 1m12.671s 00:07:57.005 sys 0m8.071s 00:07:57.005 21:39:15 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.005 21:39:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.005 ************************************ 00:07:57.005 END TEST blockdev_nvme_gpt 00:07:57.005 ************************************ 00:07:57.005 21:39:15 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:57.005 21:39:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.005 21:39:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.005 21:39:15 -- common/autotest_common.sh@10 -- # set +x 00:07:57.005 ************************************ 00:07:57.005 START TEST nvme 00:07:57.005 ************************************ 00:07:57.005 21:39:15 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:57.005 * Looking for test storage... 00:07:57.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:57.005 21:39:15 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:57.005 21:39:15 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:57.005 21:39:15 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:57.005 21:39:15 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:57.005 21:39:15 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.005 21:39:15 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.005 21:39:15 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.005 21:39:15 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.005 21:39:15 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.005 21:39:15 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.005 21:39:15 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.005 21:39:15 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.005 21:39:15 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.005 21:39:15 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.005 21:39:15 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.005 21:39:15 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:57.005 21:39:15 nvme -- scripts/common.sh@345 -- # : 1 00:07:57.005 21:39:15 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.005 21:39:15 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.005 21:39:15 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:57.005 21:39:15 nvme -- scripts/common.sh@353 -- # local d=1 00:07:57.005 21:39:15 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.005 21:39:15 nvme -- scripts/common.sh@355 -- # echo 1 00:07:57.005 21:39:15 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.005 21:39:15 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:57.005 21:39:15 nvme -- scripts/common.sh@353 -- # local d=2 00:07:57.005 21:39:15 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.005 21:39:15 nvme -- scripts/common.sh@355 -- # echo 2 00:07:57.005 21:39:15 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.005 21:39:15 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.005 21:39:15 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.005 21:39:15 nvme -- scripts/common.sh@368 -- # return 0 00:07:57.005 21:39:15 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.005 21:39:15 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:57.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.005 --rc genhtml_branch_coverage=1 00:07:57.005 --rc genhtml_function_coverage=1 00:07:57.005 --rc genhtml_legend=1 00:07:57.005 --rc geninfo_all_blocks=1 00:07:57.005 --rc geninfo_unexecuted_blocks=1 00:07:57.005 00:07:57.005 ' 00:07:57.005 21:39:15 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:57.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.005 --rc genhtml_branch_coverage=1 00:07:57.005 --rc genhtml_function_coverage=1 00:07:57.005 --rc genhtml_legend=1 00:07:57.005 --rc geninfo_all_blocks=1 00:07:57.005 --rc geninfo_unexecuted_blocks=1 00:07:57.005 00:07:57.005 ' 00:07:57.005 21:39:15 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:57.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.005 --rc genhtml_branch_coverage=1 00:07:57.005 --rc genhtml_function_coverage=1 00:07:57.005 --rc genhtml_legend=1 00:07:57.005 --rc geninfo_all_blocks=1 00:07:57.005 --rc geninfo_unexecuted_blocks=1 00:07:57.005 00:07:57.005 ' 00:07:57.005 21:39:15 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:57.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.005 --rc genhtml_branch_coverage=1 00:07:57.005 --rc genhtml_function_coverage=1 00:07:57.005 --rc genhtml_legend=1 00:07:57.005 --rc geninfo_all_blocks=1 00:07:57.005 --rc geninfo_unexecuted_blocks=1 00:07:57.005 00:07:57.005 ' 00:07:57.005 21:39:15 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:57.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:57.838 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.838 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:57.838 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.098 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:58.098 21:39:16 nvme -- nvme/nvme.sh@79 -- # uname 00:07:58.098 21:39:16 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:58.098 21:39:16 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:58.098 21:39:16 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:58.098 21:39:16 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE'
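The trace that follows walks through _start_stub from common/autotest_common.sh: it backgrounds the SPDK stub app as the DPDK primary process, then polls once per second until the stub publishes /var/run/spdk_stub0, checking /proc/<pid> along the way so a crashed stub fails the run instead of hanging it. A minimal bash sketch of that launch-and-wait handshake, with flags and paths taken from the trace (the 60-iteration cap and the standalone packaging are assumptions, not the verbatim helper):

    #!/usr/bin/env bash
    # Sketch of the stub launch-and-wait pattern traced below (assumed standalone form).
    ROOTDIR=${ROOTDIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout location

    "$ROOTDIR/test/app/stub/stub" -s 4096 -i 0 -m 0xE &   # DPDK primary process holding hugepages
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    for ((i = 0; i < 60; i++)); do                        # cap is an assumption; the real helper loops until ready
        [[ -e /var/run/spdk_stub0 ]] && break             # stub creates this file once initialization finishes
        [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
        sleep 1s
    done
    [[ -e /var/run/spdk_stub0 ]] || { echo "stub never became ready" >&2; exit 1; }
    echo done.

Polling a readiness file while also watching /proc keeps the fast path fast and bounds the failure case, which is why the trace alternates the '-e /var/run/spdk_stub0' test with the '[[ -e /proc/63225 ]]' liveness check and a one-second sleep.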
00:07:58.098 21:39:16 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:58.098 21:39:16 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:58.098 21:39:16 nvme -- common/autotest_common.sh@1071 -- # stubpid=63225 00:07:58.098 21:39:16 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:58.098 Waiting for stub to ready for secondary processes... 00:07:58.098 21:39:16 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:58.098 21:39:16 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:58.098 21:39:16 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63225 ]] 00:07:58.098 21:39:16 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:58.098 [2024-09-29 21:39:16.927248] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:58.098 [2024-09-29 21:39:16.927361] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:59.039 [2024-09-29 21:39:17.662315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.039 [2024-09-29 21:39:17.834715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.039 [2024-09-29 21:39:17.834787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.039 [2024-09-29 21:39:17.834978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.039 [2024-09-29 21:39:17.848516] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:59.039 [2024-09-29 21:39:17.848558] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:59.039 [2024-09-29 21:39:17.858134] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:59.039 [2024-09-29 21:39:17.858217] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:59.039 [2024-09-29 21:39:17.859791] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:59.039 [2024-09-29 21:39:17.860341] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:59.039 [2024-09-29 21:39:17.860450] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:59.039 [2024-09-29 21:39:17.863605] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:59.039 [2024-09-29 21:39:17.863876] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:59.039 [2024-09-29 21:39:17.863968] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:59.039 [2024-09-29 21:39:17.867279] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:59.039 [2024-09-29 21:39:17.867571] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:59.039 [2024-09-29 21:39:17.867671] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:59.039 [2024-09-29 21:39:17.867732] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:59.039 [2024-09-29 21:39:17.867790] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created
00:07:59.039 21:39:17 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:59.039 done. 00:07:59.039 21:39:17 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:59.039 21:39:17 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:59.039 21:39:17 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:59.039 21:39:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.039 21:39:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.039 ************************************ 00:07:59.039 START TEST nvme_reset 00:07:59.039 ************************************ 00:07:59.039 21:39:17 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:59.301 Initializing NVMe Controllers 00:07:59.301 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:59.301 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:59.301 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:59.301 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:59.301 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:59.301 00:07:59.301 real 0m0.199s 00:07:59.301 user 0m0.057s 00:07:59.301 sys 0m0.093s 00:07:59.301 21:39:18 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.301 21:39:18 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:59.301 ************************************ 00:07:59.301 END TEST nvme_reset 00:07:59.301 ************************************ 00:07:59.301 21:39:18 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:59.301 21:39:18 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.301 21:39:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.301 21:39:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.301 ************************************ 00:07:59.301 START TEST nvme_identify 00:07:59.301 ************************************ 00:07:59.301 21:39:18 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:59.301 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:59.301 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:59.301 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:59.301 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:59.301 21:39:18 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:59.301 21:39:18 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:59.301 21:39:18 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:59.301 21:39:18 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:59.301 21:39:18 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:59.301 21:39:18 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:59.301 21:39:18 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:59.301 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
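The get_nvme_bdfs helper traced just above (common/autotest_common.sh@1496-1502) discovers the controllers by asking scripts/gen_nvme.sh for a JSON bdev config and pulling each controller's PCI address out of it with jq; nvme_identify then runs spdk_nvme_identify against what it finds. A condensed bash sketch of that enumeration idiom, assuming the same repo layout as the trace (the standalone wrapper and the loop are illustrative, not the verbatim helper):

    #!/usr/bin/env bash
    # Sketch of the BDF-enumeration idiom traced above: gen_nvme.sh emits a JSON
    # bdev config and jq extracts every controller's transport address (PCI BDF).
    ROOTDIR=${ROOTDIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout location

    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$ROOTDIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1   # mirrors the (( 4 == 0 )) emptiness guard in the trace
        printf '%s\n' "${bdfs[@]}"
    }

    # Example use: visit each discovered controller, as nvme.sh does before identify.
    for bdf in $(get_nvme_bdfs); do
        echo "NVMe controller at $bdf"
    done

Driving discovery through gen_nvme.sh rather than parsing lspci directly means the test sees exactly the set of controllers SPDK itself would configure, which is why the trace prints the four QEMU BDFs (0000:00:10.0 through 0000:00:13.0) before invoking spdk_nvme_identify.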
00:07:59.608 [2024-09-29 21:39:18.362452] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 63247 terminated unexpected 00:07:59.608 ===================================================== 00:07:59.608 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:59.608 ===================================================== 00:07:59.608 Controller Capabilities/Features 00:07:59.609 ================================ 00:07:59.609 Vendor ID: 1b36 00:07:59.609 Subsystem Vendor ID: 1af4 00:07:59.609 Serial Number: 12340 00:07:59.609 Model Number: QEMU NVMe Ctrl 00:07:59.609 Firmware Version: 8.0.0 00:07:59.609 Recommended Arb Burst: 6 00:07:59.609 IEEE OUI Identifier: 00 54 52 00:07:59.609 Multi-path I/O 00:07:59.609 May have multiple subsystem ports: No 00:07:59.609 May have multiple controllers: No 00:07:59.609 Associated with SR-IOV VF: No 00:07:59.609 Max Data Transfer Size: 524288 00:07:59.609 Max Number of Namespaces: 256 00:07:59.609 Max Number of I/O Queues: 64 00:07:59.609 NVMe Specification Version (VS): 1.4 00:07:59.609 NVMe Specification Version (Identify): 1.4 00:07:59.609 Maximum Queue Entries: 2048 00:07:59.609 Contiguous Queues Required: Yes 00:07:59.609 Arbitration Mechanisms Supported 00:07:59.609 Weighted Round Robin: Not Supported 00:07:59.609 Vendor Specific: Not Supported 00:07:59.609 Reset Timeout: 7500 ms 00:07:59.609 Doorbell Stride: 4 bytes 00:07:59.609 NVM Subsystem Reset: Not Supported 00:07:59.609 Command Sets Supported 00:07:59.609 NVM Command Set: Supported 00:07:59.609 Boot Partition: Not Supported 00:07:59.609 Memory Page Size Minimum: 4096 bytes 00:07:59.609 Memory Page Size Maximum: 65536 bytes 00:07:59.609 Persistent Memory Region: Not Supported 00:07:59.609 Optional Asynchronous Events Supported 00:07:59.609 Namespace Attribute Notices: Supported 00:07:59.609 Firmware Activation Notices: Not Supported 00:07:59.609 ANA Change Notices: Not Supported 00:07:59.609 PLE Aggregate Log Change Notices: Not Supported 00:07:59.609 LBA Status Info Alert Notices: Not Supported 00:07:59.609 EGE Aggregate Log Change Notices: Not Supported 00:07:59.609 Normal NVM Subsystem Shutdown event: Not Supported 00:07:59.609 Zone Descriptor Change Notices: Not Supported 00:07:59.609 Discovery Log Change Notices: Not Supported 00:07:59.609 Controller Attributes 00:07:59.609 128-bit Host Identifier: Not Supported 00:07:59.609 Non-Operational Permissive Mode: Not Supported 00:07:59.609 NVM Sets: Not Supported 00:07:59.609 Read Recovery Levels: Not Supported 00:07:59.609 Endurance Groups: Not Supported 00:07:59.609 Predictable Latency Mode: Not Supported 00:07:59.609 Traffic Based Keep Alive: Not Supported 00:07:59.609 Namespace Granularity: Not Supported 00:07:59.609 SQ Associations: Not Supported 00:07:59.609 UUID List: Not Supported 00:07:59.609 Multi-Domain Subsystem: Not Supported 00:07:59.609 Fixed Capacity Management: Not Supported 00:07:59.609 Variable Capacity Management: Not Supported 00:07:59.609 Delete Endurance Group: Not Supported 00:07:59.609 Delete NVM Set: Not Supported 00:07:59.609 Extended LBA Formats Supported: Supported 00:07:59.609 Flexible Data Placement Supported: Not Supported 00:07:59.609 00:07:59.609 Controller Memory Buffer Support 00:07:59.609 ================================ 00:07:59.609 Supported: No 00:07:59.609 00:07:59.609 Persistent Memory Region Support 00:07:59.609 ================================ 00:07:59.609 Supported: No 00:07:59.609 00:07:59.609 Admin Command Set Attributes 00:07:59.609 ============================ 00:07:59.609 Security Send/Receive: Not 
Supported 00:07:59.609 Format NVM: Supported 00:07:59.609 Firmware Activate/Download: Not Supported 00:07:59.609 Namespace Management: Supported 00:07:59.609 Device Self-Test: Not Supported 00:07:59.609 Directives: Supported 00:07:59.609 NVMe-MI: Not Supported 00:07:59.609 Virtualization Management: Not Supported 00:07:59.609 Doorbell Buffer Config: Supported 00:07:59.609 Get LBA Status Capability: Not Supported 00:07:59.609 Command & Feature Lockdown Capability: Not Supported 00:07:59.609 Abort Command Limit: 4 00:07:59.609 Async Event Request Limit: 4 00:07:59.609 Number of Firmware Slots: N/A 00:07:59.609 Firmware Slot 1 Read-Only: N/A 00:07:59.609 Firmware Activation Without Reset: N/A 00:07:59.609 Multiple Update Detection Support: N/A 00:07:59.609 Firmware Update Granularity: No Information Provided 00:07:59.609 Per-Namespace SMART Log: Yes 00:07:59.609 Asymmetric Namespace Access Log Page: Not Supported 00:07:59.609 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:59.609 Command Effects Log Page: Supported 00:07:59.609 Get Log Page Extended Data: Supported 00:07:59.609 Telemetry Log Pages: Not Supported 00:07:59.609 Persistent Event Log Pages: Not Supported 00:07:59.609 Supported Log Pages Log Page: May Support 00:07:59.609 Commands Supported & Effects Log Page: Not Supported 00:07:59.609 Feature Identifiers & Effects Log Page: May Support 00:07:59.609 NVMe-MI Commands & Effects Log Page: May Support 00:07:59.609 Data Area 4 for Telemetry Log: Not Supported 00:07:59.609 Error Log Page Entries Supported: 1 00:07:59.609 Keep Alive: Not Supported 00:07:59.609 00:07:59.609 NVM Command Set Attributes 00:07:59.609 ========================== 00:07:59.609 Submission Queue Entry Size 00:07:59.609 Max: 64 00:07:59.609 Min: 64 00:07:59.609 Completion Queue Entry Size 00:07:59.609 Max: 16 00:07:59.609 Min: 16 00:07:59.609 Number of Namespaces: 256 00:07:59.609 Compare Command: Supported 00:07:59.609 Write Uncorrectable Command: Not Supported 00:07:59.609 Dataset Management Command: Supported 00:07:59.609 Write Zeroes Command: Supported 00:07:59.609 Set Features Save Field: Supported 00:07:59.609 Reservations: Not Supported 00:07:59.609 Timestamp: Supported 00:07:59.609 Copy: Supported 00:07:59.609 Volatile Write Cache: Present 00:07:59.609 Atomic Write Unit (Normal): 1 00:07:59.609 Atomic Write Unit (PFail): 1 00:07:59.609 Atomic Compare & Write Unit: 1 00:07:59.609 Fused Compare & Write: Not Supported 00:07:59.609 Scatter-Gather List 00:07:59.609 SGL Command Set: Supported 00:07:59.609 SGL Keyed: Not Supported 00:07:59.609 SGL Bit Bucket Descriptor: Not Supported 00:07:59.609 SGL Metadata Pointer: Not Supported 00:07:59.609 Oversized SGL: Not Supported 00:07:59.609 SGL Metadata Address: Not Supported 00:07:59.609 SGL Offset: Not Supported 00:07:59.609 Transport SGL Data Block: Not Supported 00:07:59.609 Replay Protected Memory Block: Not Supported 00:07:59.609 00:07:59.609 Firmware Slot Information 00:07:59.609 ========================= 00:07:59.609 Active slot: 1 00:07:59.609 Slot 1 Firmware Revision: 1.0 00:07:59.609 00:07:59.609 00:07:59.609 Commands Supported and Effects 00:07:59.609 ============================== 00:07:59.609 Admin Commands 00:07:59.609 -------------- 00:07:59.609 Delete I/O Submission Queue (00h): Supported 00:07:59.609 Create I/O Submission Queue (01h): Supported 00:07:59.609 Get Log Page (02h): Supported 00:07:59.609 Delete I/O Completion Queue (04h): Supported 00:07:59.609 Create I/O Completion Queue (05h): Supported 00:07:59.609 Identify (06h): Supported 00:07:59.609 
Abort (08h): Supported 00:07:59.609 Set Features (09h): Supported 00:07:59.609 Get Features (0Ah): Supported 00:07:59.609 Asynchronous Event Request (0Ch): Supported 00:07:59.609 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:59.609 Directive Send (19h): Supported 00:07:59.609 Directive Receive (1Ah): Supported 00:07:59.609 Virtualization Management (1Ch): Supported 00:07:59.609 Doorbell Buffer Config (7Ch): Supported 00:07:59.609 Format NVM (80h): Supported LBA-Change 00:07:59.609 I/O Commands 00:07:59.609 ------------ 00:07:59.609 Flush (00h): Supported LBA-Change 00:07:59.609 Write (01h): Supported LBA-Change 00:07:59.610 Read (02h): Supported 00:07:59.610 Compare (05h): Supported 00:07:59.610 Write Zeroes (08h): Supported LBA-Change 00:07:59.610 Dataset Management (09h): Supported LBA-Change 00:07:59.610 Unknown (0Ch): Supported 00:07:59.610 Unknown (12h): Supported 00:07:59.610 Copy (19h): Supported LBA-Change 00:07:59.610 Unknown (1Dh): Supported LBA-Change 00:07:59.610 00:07:59.610 Error Log 00:07:59.610 ========= 00:07:59.610 00:07:59.610 Arbitration 00:07:59.610 =========== 00:07:59.610 Arbitration Burst: no limit 00:07:59.610 00:07:59.610 Power Management 00:07:59.610 ================ 00:07:59.610 Number of Power States: 1 00:07:59.610 Current Power State: Power State #0 00:07:59.610 Power State #0: 00:07:59.610 Max Power: 25.00 W 00:07:59.610 Non-Operational State: Operational 00:07:59.610 Entry Latency: 16 microseconds 00:07:59.610 Exit Latency: 4 microseconds 00:07:59.610 Relative Read Throughput: 0 00:07:59.610 Relative Read Latency: 0 00:07:59.610 Relative Write Throughput: 0 00:07:59.610 Relative Write Latency: 0 00:07:59.610 [2024-09-29 21:39:18.363801] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 63247 terminated unexpected 00:07:59.610 Idle Power: Not Reported 00:07:59.610 Active Power: Not Reported 00:07:59.610 Non-Operational Permissive Mode: Not Supported 00:07:59.610 00:07:59.610 Health Information 00:07:59.610 ================== 00:07:59.610 Critical Warnings: 00:07:59.610 Available Spare Space: OK 00:07:59.610 Temperature: OK 00:07:59.610 Device Reliability: OK 00:07:59.610 Read Only: No 00:07:59.610 Volatile Memory Backup: OK 00:07:59.610 Current Temperature: 323 Kelvin (50 Celsius) 00:07:59.610 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:59.610 Available Spare: 0% 00:07:59.610 Available Spare Threshold: 0% 00:07:59.610 Life Percentage Used: 0% 00:07:59.610 Data Units Read: 723 00:07:59.610 Data Units Written: 651 00:07:59.610 Host Read Commands: 40585 00:07:59.610 Host Write Commands: 40371 00:07:59.610 Controller Busy Time: 0 minutes 00:07:59.610 Power Cycles: 0 00:07:59.610 Power On Hours: 0 hours 00:07:59.610 Unsafe Shutdowns: 0 00:07:59.610 Unrecoverable Media Errors: 0 00:07:59.610 Lifetime Error Log Entries: 0 00:07:59.610 Warning Temperature Time: 0 minutes 00:07:59.610 Critical Temperature Time: 0 minutes 00:07:59.610 00:07:59.610 Number of Queues 00:07:59.610 ================ 00:07:59.610 Number of I/O Submission Queues: 64 00:07:59.610 Number of I/O Completion Queues: 64 00:07:59.610 00:07:59.610 ZNS Specific Controller Data 00:07:59.610 ============================ 00:07:59.610 Zone Append Size Limit: 0 00:07:59.610 00:07:59.610 00:07:59.610 Active Namespaces 00:07:59.610 ================= 00:07:59.610 Namespace ID:1 00:07:59.610 Error Recovery Timeout: Unlimited 00:07:59.610 Command Set Identifier: NVM (00h) 00:07:59.610 Deallocate: Supported 00:07:59.610 Deallocated/Unwritten Error: 
Supported 00:07:59.610 Deallocated Read Value: All 0x00 00:07:59.610 Deallocate in Write Zeroes: Not Supported 00:07:59.610 Deallocated Guard Field: 0xFFFF 00:07:59.610 Flush: Supported 00:07:59.610 Reservation: Not Supported 00:07:59.610 Metadata Transferred as: Separate Metadata Buffer 00:07:59.610 Namespace Sharing Capabilities: Private 00:07:59.610 Size (in LBAs): 1548666 (5GiB) 00:07:59.610 Capacity (in LBAs): 1548666 (5GiB) 00:07:59.610 Utilization (in LBAs): 1548666 (5GiB) 00:07:59.610 Thin Provisioning: Not Supported 00:07:59.610 Per-NS Atomic Units: No 00:07:59.610 Maximum Single Source Range Length: 128 00:07:59.610 Maximum Copy Length: 128 00:07:59.610 Maximum Source Range Count: 128 00:07:59.610 NGUID/EUI64 Never Reused: No 00:07:59.610 Namespace Write Protected: No 00:07:59.610 Number of LBA Formats: 8 00:07:59.610 Current LBA Format: LBA Format #07 00:07:59.610 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:59.610 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:59.610 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:59.610 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:59.610 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:59.610 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:59.610 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:59.610 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:59.610 00:07:59.610 NVM Specific Namespace Data 00:07:59.610 =========================== 00:07:59.610 Logical Block Storage Tag Mask: 0 00:07:59.610 Protection Information Capabilities: 00:07:59.610 16b Guard Protection Information Storage Tag Support: No 00:07:59.610 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:59.610 Storage Tag Check Read Support: No 00:07:59.610 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.610 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.610 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.610 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.610 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.610 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.610 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.610 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.610 ===================================================== 00:07:59.610 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:59.610 ===================================================== 00:07:59.610 Controller Capabilities/Features 00:07:59.610 ================================ 00:07:59.610 Vendor ID: 1b36 00:07:59.610 Subsystem Vendor ID: 1af4 00:07:59.610 Serial Number: 12341 00:07:59.610 Model Number: QEMU NVMe Ctrl 00:07:59.610 Firmware Version: 8.0.0 00:07:59.610 Recommended Arb Burst: 6 00:07:59.610 IEEE OUI Identifier: 00 54 52 00:07:59.610 Multi-path I/O 00:07:59.610 May have multiple subsystem ports: No 00:07:59.610 May have multiple controllers: No 00:07:59.610 Associated with SR-IOV VF: No 00:07:59.610 Max Data Transfer Size: 524288 00:07:59.610 Max Number of Namespaces: 256 00:07:59.610 Max Number of I/O Queues: 64 00:07:59.610 NVMe Specification Version (VS): 1.4 00:07:59.610 NVMe Specification Version (Identify): 1.4 
00:07:59.610 Maximum Queue Entries: 2048 00:07:59.610 Contiguous Queues Required: Yes 00:07:59.610 Arbitration Mechanisms Supported 00:07:59.610 Weighted Round Robin: Not Supported 00:07:59.610 Vendor Specific: Not Supported 00:07:59.610 Reset Timeout: 7500 ms 00:07:59.610 Doorbell Stride: 4 bytes 00:07:59.610 NVM Subsystem Reset: Not Supported 00:07:59.610 Command Sets Supported 00:07:59.610 NVM Command Set: Supported 00:07:59.610 Boot Partition: Not Supported 00:07:59.610 Memory Page Size Minimum: 4096 bytes 00:07:59.610 Memory Page Size Maximum: 65536 bytes 00:07:59.610 Persistent Memory Region: Not Supported 00:07:59.610 Optional Asynchronous Events Supported 00:07:59.610 Namespace Attribute Notices: Supported 00:07:59.610 Firmware Activation Notices: Not Supported 00:07:59.610 ANA Change Notices: Not Supported 00:07:59.610 PLE Aggregate Log Change Notices: Not Supported 00:07:59.611 LBA Status Info Alert Notices: Not Supported 00:07:59.611 EGE Aggregate Log Change Notices: Not Supported 00:07:59.611 Normal NVM Subsystem Shutdown event: Not Supported 00:07:59.611 Zone Descriptor Change Notices: Not Supported 00:07:59.611 Discovery Log Change Notices: Not Supported 00:07:59.611 Controller Attributes 00:07:59.611 128-bit Host Identifier: Not Supported 00:07:59.611 Non-Operational Permissive Mode: Not Supported 00:07:59.611 NVM Sets: Not Supported 00:07:59.611 Read Recovery Levels: Not Supported 00:07:59.611 Endurance Groups: Not Supported 00:07:59.611 Predictable Latency Mode: Not Supported 00:07:59.611 Traffic Based Keep Alive: Not Supported 00:07:59.611 Namespace Granularity: Not Supported 00:07:59.611 SQ Associations: Not Supported 00:07:59.611 UUID List: Not Supported 00:07:59.611 Multi-Domain Subsystem: Not Supported 00:07:59.611 Fixed Capacity Management: Not Supported 00:07:59.611 Variable Capacity Management: Not Supported 00:07:59.611 Delete Endurance Group: Not Supported 00:07:59.611 Delete NVM Set: Not Supported 00:07:59.611 Extended LBA Formats Supported: Supported 00:07:59.611 Flexible Data Placement Supported: Not Supported 00:07:59.611 00:07:59.611 Controller Memory Buffer Support 00:07:59.611 ================================ 00:07:59.611 Supported: No 00:07:59.611 00:07:59.611 Persistent Memory Region Support 00:07:59.611 ================================ 00:07:59.611 Supported: No 00:07:59.611 00:07:59.611 Admin Command Set Attributes 00:07:59.611 ============================ 00:07:59.611 Security Send/Receive: Not Supported 00:07:59.611 Format NVM: Supported 00:07:59.611 Firmware Activate/Download: Not Supported 00:07:59.611 Namespace Management: Supported 00:07:59.611 Device Self-Test: Not Supported 00:07:59.611 Directives: Supported 00:07:59.611 NVMe-MI: Not Supported 00:07:59.611 Virtualization Management: Not Supported 00:07:59.611 Doorbell Buffer Config: Supported 00:07:59.611 Get LBA Status Capability: Not Supported 00:07:59.611 Command & Feature Lockdown Capability: Not Supported 00:07:59.611 Abort Command Limit: 4 00:07:59.611 Async Event Request Limit: 4 00:07:59.611 Number of Firmware Slots: N/A 00:07:59.611 Firmware Slot 1 Read-Only: N/A 00:07:59.611 Firmware Activation Without Reset: N/A 00:07:59.611 Multiple Update Detection Support: N/A 00:07:59.611 Firmware Update Granularity: No Information Provided 00:07:59.611 Per-Namespace SMART Log: Yes 00:07:59.611 Asymmetric Namespace Access Log Page: Not Supported 00:07:59.611 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:59.611 Command Effects Log Page: Supported 00:07:59.611 Get Log Page Extended Data: 
Supported 00:07:59.611 Telemetry Log Pages: Not Supported 00:07:59.611 Persistent Event Log Pages: Not Supported 00:07:59.611 Supported Log Pages Log Page: May Support 00:07:59.611 Commands Supported & Effects Log Page: Not Supported 00:07:59.611 Feature Identifiers & Effects Log Page: May Support 00:07:59.611 NVMe-MI Commands & Effects Log Page: May Support 00:07:59.611 Data Area 4 for Telemetry Log: Not Supported 00:07:59.611 Error Log Page Entries Supported: 1 00:07:59.611 Keep Alive: Not Supported 00:07:59.611 00:07:59.611 NVM Command Set Attributes 00:07:59.611 ========================== 00:07:59.611 Submission Queue Entry Size 00:07:59.611 Max: 64 00:07:59.611 Min: 64 00:07:59.611 Completion Queue Entry Size 00:07:59.611 Max: 16 00:07:59.611 Min: 16 00:07:59.611 Number of Namespaces: 256 00:07:59.611 Compare Command: Supported 00:07:59.611 Write Uncorrectable Command: Not Supported 00:07:59.611 Dataset Management Command: Supported 00:07:59.611 Write Zeroes Command: Supported 00:07:59.611 Set Features Save Field: Supported 00:07:59.611 Reservations: Not Supported 00:07:59.611 Timestamp: Supported 00:07:59.611 Copy: Supported 00:07:59.611 Volatile Write Cache: Present 00:07:59.611 Atomic Write Unit (Normal): 1 00:07:59.611 Atomic Write Unit (PFail): 1 00:07:59.611 Atomic Compare & Write Unit: 1 00:07:59.611 Fused Compare & Write: Not Supported 00:07:59.611 Scatter-Gather List 00:07:59.611 SGL Command Set: Supported 00:07:59.611 SGL Keyed: Not Supported 00:07:59.611 SGL Bit Bucket Descriptor: Not Supported 00:07:59.611 SGL Metadata Pointer: Not Supported 00:07:59.611 Oversized SGL: Not Supported 00:07:59.611 SGL Metadata Address: Not Supported 00:07:59.611 SGL Offset: Not Supported 00:07:59.611 Transport SGL Data Block: Not Supported 00:07:59.611 Replay Protected Memory Block: Not Supported 00:07:59.611 00:07:59.611 Firmware Slot Information 00:07:59.611 ========================= 00:07:59.611 Active slot: 1 00:07:59.611 Slot 1 Firmware Revision: 1.0 00:07:59.611 00:07:59.611 00:07:59.611 Commands Supported and Effects 00:07:59.611 ============================== 00:07:59.611 Admin Commands 00:07:59.611 -------------- 00:07:59.611 Delete I/O Submission Queue (00h): Supported 00:07:59.611 Create I/O Submission Queue (01h): Supported 00:07:59.611 Get Log Page (02h): Supported 00:07:59.611 Delete I/O Completion Queue (04h): Supported 00:07:59.611 Create I/O Completion Queue (05h): Supported 00:07:59.611 Identify (06h): Supported 00:07:59.611 Abort (08h): Supported 00:07:59.611 Set Features (09h): Supported 00:07:59.611 Get Features (0Ah): Supported 00:07:59.611 Asynchronous Event Request (0Ch): Supported 00:07:59.611 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:59.611 Directive Send (19h): Supported 00:07:59.611 Directive Receive (1Ah): Supported 00:07:59.611 Virtualization Management (1Ch): Supported 00:07:59.611 Doorbell Buffer Config (7Ch): Supported 00:07:59.611 Format NVM (80h): Supported LBA-Change 00:07:59.611 I/O Commands 00:07:59.611 ------------ 00:07:59.611 Flush (00h): Supported LBA-Change 00:07:59.611 Write (01h): Supported LBA-Change 00:07:59.611 Read (02h): Supported 00:07:59.611 Compare (05h): Supported 00:07:59.611 Write Zeroes (08h): Supported LBA-Change 00:07:59.611 Dataset Management (09h): Supported LBA-Change 00:07:59.611 Unknown (0Ch): Supported 00:07:59.611 Unknown (12h): Supported 00:07:59.611 Copy (19h): Supported LBA-Change 00:07:59.611 Unknown (1Dh): Supported LBA-Change 00:07:59.611 00:07:59.611 Error Log 00:07:59.611 ========= 00:07:59.611 
00:07:59.611 Arbitration 00:07:59.611 =========== 00:07:59.611 Arbitration Burst: no limit 00:07:59.611 00:07:59.611 Power Management 00:07:59.611 ================ 00:07:59.611 Number of Power States: 1 00:07:59.611 Current Power State: Power State #0 00:07:59.611 Power State #0: 00:07:59.611 Max Power: 25.00 W 00:07:59.611 Non-Operational State: Operational 00:07:59.611 Entry Latency: 16 microseconds 00:07:59.611 Exit Latency: 4 microseconds 00:07:59.611 Relative Read Throughput: 0 00:07:59.611 Relative Read Latency: 0 00:07:59.611 Relative Write Throughput: 0 00:07:59.611 Relative Write Latency: 0 00:07:59.611 Idle Power: Not Reported 00:07:59.611 Active Power: Not Reported 00:07:59.611 Non-Operational Permissive Mode: Not Supported 00:07:59.611 00:07:59.611 Health Information 00:07:59.611 ================== 00:07:59.611 Critical Warnings: 00:07:59.611 Available Spare Space: OK 00:07:59.611 [2024-09-29 21:39:18.364589] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 63247 terminated unexpected 00:07:59.611 Temperature: OK 00:07:59.611 Device Reliability: OK 00:07:59.611 Read Only: No 00:07:59.611 Volatile Memory Backup: OK 00:07:59.611 Current Temperature: 323 Kelvin (50 Celsius) 00:07:59.611 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:59.611 Available Spare: 0% 00:07:59.611 Available Spare Threshold: 0% 00:07:59.611 Life Percentage Used: 0% 00:07:59.611 Data Units Read: 1099 00:07:59.611 Data Units Written: 971 00:07:59.611 Host Read Commands: 58038 00:07:59.611 Host Write Commands: 56919 00:07:59.611 Controller Busy Time: 0 minutes 00:07:59.611 Power Cycles: 0 00:07:59.611 Power On Hours: 0 hours 00:07:59.611 Unsafe Shutdowns: 0 00:07:59.611 Unrecoverable Media Errors: 0 00:07:59.611 Lifetime Error Log Entries: 0 00:07:59.611 Warning Temperature Time: 0 minutes 00:07:59.611 Critical Temperature Time: 0 minutes 00:07:59.611 00:07:59.611 Number of Queues 00:07:59.612 ================ 00:07:59.612 Number of I/O Submission Queues: 64 00:07:59.612 Number of I/O Completion Queues: 64 00:07:59.612 00:07:59.612 ZNS Specific Controller Data 00:07:59.612 ============================ 00:07:59.612 Zone Append Size Limit: 0 00:07:59.612 00:07:59.612 00:07:59.612 Active Namespaces 00:07:59.612 ================= 00:07:59.612 Namespace ID:1 00:07:59.612 Error Recovery Timeout: Unlimited 00:07:59.612 Command Set Identifier: NVM (00h) 00:07:59.612 Deallocate: Supported 00:07:59.612 Deallocated/Unwritten Error: Supported 00:07:59.612 Deallocated Read Value: All 0x00 00:07:59.612 Deallocate in Write Zeroes: Not Supported 00:07:59.612 Deallocated Guard Field: 0xFFFF 00:07:59.612 Flush: Supported 00:07:59.612 Reservation: Not Supported 00:07:59.612 Namespace Sharing Capabilities: Private 00:07:59.612 Size (in LBAs): 1310720 (5GiB) 00:07:59.612 Capacity (in LBAs): 1310720 (5GiB) 00:07:59.612 Utilization (in LBAs): 1310720 (5GiB) 00:07:59.612 Thin Provisioning: Not Supported 00:07:59.612 Per-NS Atomic Units: No 00:07:59.612 Maximum Single Source Range Length: 128 00:07:59.612 Maximum Copy Length: 128 00:07:59.612 Maximum Source Range Count: 128 00:07:59.612 NGUID/EUI64 Never Reused: No 00:07:59.612 Namespace Write Protected: No 00:07:59.612 Number of LBA Formats: 8 00:07:59.612 Current LBA Format: LBA Format #04 00:07:59.612 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:59.612 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:59.612 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:59.612 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:59.612 
LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:59.612 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:59.612 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:59.612 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:59.612 00:07:59.612 NVM Specific Namespace Data 00:07:59.612 =========================== 00:07:59.612 Logical Block Storage Tag Mask: 0 00:07:59.612 Protection Information Capabilities: 00:07:59.612 16b Guard Protection Information Storage Tag Support: No 00:07:59.612 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:59.612 Storage Tag Check Read Support: No 00:07:59.612 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.612 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.612 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.612 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.612 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.612 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.612 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.612 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.612 ===================================================== 00:07:59.612 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:59.612 ===================================================== 00:07:59.612 Controller Capabilities/Features 00:07:59.612 ================================ 00:07:59.612 Vendor ID: 1b36 00:07:59.612 Subsystem Vendor ID: 1af4 00:07:59.612 Serial Number: 12343 00:07:59.612 Model Number: QEMU NVMe Ctrl 00:07:59.612 Firmware Version: 8.0.0 00:07:59.612 Recommended Arb Burst: 6 00:07:59.612 IEEE OUI Identifier: 00 54 52 00:07:59.612 Multi-path I/O 00:07:59.612 May have multiple subsystem ports: No 00:07:59.612 May have multiple controllers: Yes 00:07:59.612 Associated with SR-IOV VF: No 00:07:59.612 Max Data Transfer Size: 524288 00:07:59.612 Max Number of Namespaces: 256 00:07:59.612 Max Number of I/O Queues: 64 00:07:59.612 NVMe Specification Version (VS): 1.4 00:07:59.612 NVMe Specification Version (Identify): 1.4 00:07:59.612 Maximum Queue Entries: 2048 00:07:59.612 Contiguous Queues Required: Yes 00:07:59.612 Arbitration Mechanisms Supported 00:07:59.612 Weighted Round Robin: Not Supported 00:07:59.612 Vendor Specific: Not Supported 00:07:59.612 Reset Timeout: 7500 ms 00:07:59.612 Doorbell Stride: 4 bytes 00:07:59.612 NVM Subsystem Reset: Not Supported 00:07:59.612 Command Sets Supported 00:07:59.612 NVM Command Set: Supported 00:07:59.612 Boot Partition: Not Supported 00:07:59.612 Memory Page Size Minimum: 4096 bytes 00:07:59.612 Memory Page Size Maximum: 65536 bytes 00:07:59.612 Persistent Memory Region: Not Supported 00:07:59.612 Optional Asynchronous Events Supported 00:07:59.612 Namespace Attribute Notices: Supported 00:07:59.612 Firmware Activation Notices: Not Supported 00:07:59.612 ANA Change Notices: Not Supported 00:07:59.612 PLE Aggregate Log Change Notices: Not Supported 00:07:59.612 LBA Status Info Alert Notices: Not Supported 00:07:59.612 EGE Aggregate Log Change Notices: Not Supported 00:07:59.612 Normal NVM Subsystem Shutdown event: Not Supported 00:07:59.612 Zone Descriptor Change Notices: Not Supported 
00:07:59.612 Discovery Log Change Notices: Not Supported 00:07:59.612 Controller Attributes 00:07:59.612 128-bit Host Identifier: Not Supported 00:07:59.612 Non-Operational Permissive Mode: Not Supported 00:07:59.612 NVM Sets: Not Supported 00:07:59.612 Read Recovery Levels: Not Supported 00:07:59.612 Endurance Groups: Supported 00:07:59.612 Predictable Latency Mode: Not Supported 00:07:59.612 Traffic Based Keep Alive: Not Supported 00:07:59.612 Namespace Granularity: Not Supported 00:07:59.612 SQ Associations: Not Supported 00:07:59.612 UUID List: Not Supported 00:07:59.612 Multi-Domain Subsystem: Not Supported 00:07:59.612 Fixed Capacity Management: Not Supported 00:07:59.612 Variable Capacity Management: Not Supported 00:07:59.612 Delete Endurance Group: Not Supported 00:07:59.612 Delete NVM Set: Not Supported 00:07:59.612 Extended LBA Formats Supported: Supported 00:07:59.612 Flexible Data Placement Supported: Supported 00:07:59.612 00:07:59.612 Controller Memory Buffer Support 00:07:59.612 ================================ 00:07:59.612 Supported: No 00:07:59.612 00:07:59.612 Persistent Memory Region Support 00:07:59.612 ================================ 00:07:59.612 Supported: No 00:07:59.612 00:07:59.612 Admin Command Set Attributes 00:07:59.612 ============================ 00:07:59.612 Security Send/Receive: Not Supported 00:07:59.612 Format NVM: Supported 00:07:59.612 Firmware Activate/Download: Not Supported 00:07:59.612 Namespace Management: Supported 00:07:59.612 Device Self-Test: Not Supported 00:07:59.612 Directives: Supported 00:07:59.612 NVMe-MI: Not Supported 00:07:59.612 Virtualization Management: Not Supported 00:07:59.612 Doorbell Buffer Config: Supported 00:07:59.612 Get LBA Status Capability: Not Supported 00:07:59.612 Command & Feature Lockdown Capability: Not Supported 00:07:59.612 Abort Command Limit: 4 00:07:59.613 Async Event Request Limit: 4 00:07:59.613 Number of Firmware Slots: N/A 00:07:59.613 Firmware Slot 1 Read-Only: N/A 00:07:59.613 Firmware Activation Without Reset: N/A 00:07:59.613 Multiple Update Detection Support: N/A 00:07:59.613 Firmware Update Granularity: No Information Provided 00:07:59.613 Per-Namespace SMART Log: Yes 00:07:59.613 Asymmetric Namespace Access Log Page: Not Supported 00:07:59.613 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:59.613 Command Effects Log Page: Supported 00:07:59.613 Get Log Page Extended Data: Supported 00:07:59.613 Telemetry Log Pages: Not Supported 00:07:59.613 Persistent Event Log Pages: Not Supported 00:07:59.613 Supported Log Pages Log Page: May Support 00:07:59.613 Commands Supported & Effects Log Page: Not Supported 00:07:59.613 Feature Identifiers & Effects Log Page: May Support 00:07:59.613 NVMe-MI Commands & Effects Log Page: May Support 00:07:59.613 Data Area 4 for Telemetry Log: Not Supported 00:07:59.613 Error Log Page Entries Supported: 1 00:07:59.613 Keep Alive: Not Supported 00:07:59.613 00:07:59.613 NVM Command Set Attributes 00:07:59.613 ========================== 00:07:59.613 Submission Queue Entry Size 00:07:59.613 Max: 64 00:07:59.613 Min: 64 00:07:59.613 Completion Queue Entry Size 00:07:59.613 Max: 16 00:07:59.613 Min: 16 00:07:59.613 Number of Namespaces: 256 00:07:59.613 Compare Command: Supported 00:07:59.613 Write Uncorrectable Command: Not Supported 00:07:59.613 Dataset Management Command: Supported 00:07:59.613 Write Zeroes Command: Supported 00:07:59.613 Set Features Save Field: Supported 00:07:59.613 Reservations: Not Supported 00:07:59.613 Timestamp: Supported 00:07:59.613 Copy: 
Supported 00:07:59.613 Volatile Write Cache: Present 00:07:59.613 Atomic Write Unit (Normal): 1 00:07:59.613 Atomic Write Unit (PFail): 1 00:07:59.613 Atomic Compare & Write Unit: 1 00:07:59.613 Fused Compare & Write: Not Supported 00:07:59.613 Scatter-Gather List 00:07:59.613 SGL Command Set: Supported 00:07:59.613 SGL Keyed: Not Supported 00:07:59.613 SGL Bit Bucket Descriptor: Not Supported 00:07:59.613 SGL Metadata Pointer: Not Supported 00:07:59.613 Oversized SGL: Not Supported 00:07:59.613 SGL Metadata Address: Not Supported 00:07:59.613 SGL Offset: Not Supported 00:07:59.613 Transport SGL Data Block: Not Supported 00:07:59.613 Replay Protected Memory Block: Not Supported 00:07:59.613 00:07:59.613 Firmware Slot Information 00:07:59.613 ========================= 00:07:59.613 Active slot: 1 00:07:59.613 Slot 1 Firmware Revision: 1.0 00:07:59.613 00:07:59.613 00:07:59.613 Commands Supported and Effects 00:07:59.613 ============================== 00:07:59.613 Admin Commands 00:07:59.613 -------------- 00:07:59.613 Delete I/O Submission Queue (00h): Supported 00:07:59.613 Create I/O Submission Queue (01h): Supported 00:07:59.613 Get Log Page (02h): Supported 00:07:59.613 Delete I/O Completion Queue (04h): Supported 00:07:59.613 Create I/O Completion Queue (05h): Supported 00:07:59.613 Identify (06h): Supported 00:07:59.613 Abort (08h): Supported 00:07:59.613 Set Features (09h): Supported 00:07:59.613 Get Features (0Ah): Supported 00:07:59.613 Asynchronous Event Request (0Ch): Supported 00:07:59.613 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:59.613 Directive Send (19h): Supported 00:07:59.613 Directive Receive (1Ah): Supported 00:07:59.613 Virtualization Management (1Ch): Supported 00:07:59.613 Doorbell Buffer Config (7Ch): Supported 00:07:59.613 Format NVM (80h): Supported LBA-Change 00:07:59.613 I/O Commands 00:07:59.613 ------------ 00:07:59.613 Flush (00h): Supported LBA-Change 00:07:59.613 Write (01h): Supported LBA-Change 00:07:59.613 Read (02h): Supported 00:07:59.613 Compare (05h): Supported 00:07:59.613 Write Zeroes (08h): Supported LBA-Change 00:07:59.613 Dataset Management (09h): Supported LBA-Change 00:07:59.613 Unknown (0Ch): Supported 00:07:59.613 Unknown (12h): Supported 00:07:59.613 Copy (19h): Supported LBA-Change 00:07:59.613 Unknown (1Dh): Supported LBA-Change 00:07:59.613 00:07:59.613 Error Log 00:07:59.613 ========= 00:07:59.613 00:07:59.613 Arbitration 00:07:59.613 =========== 00:07:59.613 Arbitration Burst: no limit 00:07:59.613 00:07:59.613 Power Management 00:07:59.613 ================ 00:07:59.613 Number of Power States: 1 00:07:59.613 Current Power State: Power State #0 00:07:59.613 Power State #0: 00:07:59.613 Max Power: 25.00 W 00:07:59.613 Non-Operational State: Operational 00:07:59.613 Entry Latency: 16 microseconds 00:07:59.613 Exit Latency: 4 microseconds 00:07:59.613 Relative Read Throughput: 0 00:07:59.613 Relative Read Latency: 0 00:07:59.613 Relative Write Throughput: 0 00:07:59.613 Relative Write Latency: 0 00:07:59.613 Idle Power: Not Reported 00:07:59.613 Active Power: Not Reported 00:07:59.613 Non-Operational Permissive Mode: Not Supported 00:07:59.613 00:07:59.613 Health Information 00:07:59.613 ================== 00:07:59.613 Critical Warnings: 00:07:59.613 Available Spare Space: OK 00:07:59.613 Temperature: OK 00:07:59.613 Device Reliability: OK 00:07:59.613 Read Only: No 00:07:59.613 Volatile Memory Backup: OK 00:07:59.613 Current Temperature: 323 Kelvin (50 Celsius) 00:07:59.613 Temperature Threshold: 343 Kelvin (70 
Celsius) 00:07:59.613 Available Spare: 0% 00:07:59.613 Available Spare Threshold: 0% 00:07:59.613 Life Percentage Used: 0% 00:07:59.613 Data Units Read: 840 00:07:59.613 Data Units Written: 769 00:07:59.613 Host Read Commands: 41983 00:07:59.613 Host Write Commands: 41406 00:07:59.613 Controller Busy Time: 0 minutes 00:07:59.613 Power Cycles: 0 00:07:59.613 Power On Hours: 0 hours 00:07:59.613 Unsafe Shutdowns: 0 00:07:59.613 Unrecoverable Media Errors: 0 00:07:59.613 Lifetime Error Log Entries: 0 00:07:59.613 Warning Temperature Time: 0 minutes 00:07:59.613 Critical Temperature Time: 0 minutes 00:07:59.613 00:07:59.613 Number of Queues 00:07:59.613 ================ 00:07:59.613 Number of I/O Submission Queues: 64 00:07:59.613 Number of I/O Completion Queues: 64 00:07:59.613 00:07:59.613 ZNS Specific Controller Data 00:07:59.613 ============================ 00:07:59.613 Zone Append Size Limit: 0 00:07:59.613 00:07:59.613 00:07:59.613 Active Namespaces 00:07:59.613 ================= 00:07:59.613 Namespace ID:1 00:07:59.613 Error Recovery Timeout: Unlimited 00:07:59.613 Command Set Identifier: NVM (00h) 00:07:59.613 Deallocate: Supported 00:07:59.613 Deallocated/Unwritten Error: Supported 00:07:59.613 Deallocated Read Value: All 0x00 00:07:59.613 Deallocate in Write Zeroes: Not Supported 00:07:59.613 Deallocated Guard Field: 0xFFFF 00:07:59.613 Flush: Supported 00:07:59.613 Reservation: Not Supported 00:07:59.613 Namespace Sharing Capabilities: Multiple Controllers 00:07:59.613 Size (in LBAs): 262144 (1GiB) 00:07:59.613 Capacity (in LBAs): 262144 (1GiB) 00:07:59.613 Utilization (in LBAs): 262144 (1GiB) 00:07:59.613 Thin Provisioning: Not Supported 00:07:59.613 Per-NS Atomic Units: No 00:07:59.613 Maximum Single Source Range Length: 128 00:07:59.613 Maximum Copy Length: 128 00:07:59.613 Maximum Source Range Count: 128 00:07:59.613 NGUID/EUI64 Never Reused: No 00:07:59.613 Namespace Write Protected: No 00:07:59.613 Endurance group ID: 1 00:07:59.613 Number of LBA Formats: 8 00:07:59.613 Current LBA Format: LBA Format #04 00:07:59.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:59.613 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:59.613 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:59.613 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:59.613 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:59.613 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:59.613 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:59.613 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:59.613 00:07:59.613 Get Feature FDP: 00:07:59.613 ================ 00:07:59.614 Enabled: Yes 00:07:59.614 FDP configuration index: 0 00:07:59.614 00:07:59.614 FDP configurations log page 00:07:59.614 =========================== 00:07:59.614 Number of FDP configurations: 1 00:07:59.614 Version: 0 00:07:59.614 Size: 112 00:07:59.614 FDP Configuration Descriptor: 0 00:07:59.614 Descriptor Size: 96 00:07:59.614 Reclaim Group Identifier format: 2 00:07:59.614 FDP Volatile Write Cache: Not Present 00:07:59.614 FDP Configuration: Valid 00:07:59.614 Vendor Specific Size: 0 00:07:59.614 Number of Reclaim Groups: 2 00:07:59.614 Number of Reclaim Unit Handles: 8 00:07:59.614 Max Placement Identifiers: 128 00:07:59.614 Number of Namespaces Supported: 256 00:07:59.614 Reclaim unit Nominal Size: 6000000 bytes 00:07:59.614 Estimated Reclaim Unit Time Limit: Not Reported 00:07:59.614 RUH Desc #000: RUH Type: Initially Isolated 00:07:59.614 RUH Desc #001: RUH Type: Initially Isolated 00:07:59.614 RUH 
Desc #002: RUH Type: Initially Isolated 00:07:59.614 RUH Desc #003: RUH Type: Initially Isolated 00:07:59.614 RUH Desc #004: RUH Type: Initially Isolated 00:07:59.614 RUH Desc #005: RUH Type: Initially Isolated 00:07:59.614 RUH Desc #006: RUH Type: Initially Isolated 00:07:59.614 RUH Desc #007: RUH Type: Initially Isolated 00:07:59.614 00:07:59.614 FDP reclaim unit handle usage log page 00:07:59.614 ====================================== 00:07:59.614 Number of Reclaim Unit Handles: 8 00:07:59.614 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:59.614 RUH Usage Desc #001: RUH Attributes: Unused 00:07:59.614 RUH Usage Desc #002: RUH Attributes: Unused 00:07:59.614 RUH Usage Desc #003: RUH Attributes: Unused 00:07:59.614 RUH Usage Desc #004: RUH Attributes: Unused 00:07:59.614 RUH Usage Desc #005: RUH Attributes: Unused 00:07:59.614 RUH Usage Desc #006: RUH Attributes: Unused 00:07:59.614 RUH Usage Desc #007: RUH Attributes: Unused 00:07:59.614 00:07:59.614 FDP statistics log page 00:07:59.614 ======================= 00:07:59.614 Host bytes with metadata written: 488349696 00:07:59.614 [2024-09-29 21:39:18.365989] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 63247 terminated unexpected 00:07:59.614 Media bytes with metadata written: 488394752 00:07:59.614 Media bytes erased: 0 00:07:59.614 00:07:59.614 FDP events log page 00:07:59.614 =================== 00:07:59.614 Number of FDP events: 0 00:07:59.614 00:07:59.614 NVM Specific Namespace Data 00:07:59.614 =========================== 00:07:59.614 Logical Block Storage Tag Mask: 0 00:07:59.614 Protection Information Capabilities: 00:07:59.614 16b Guard Protection Information Storage Tag Support: No 00:07:59.614 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:59.614 Storage Tag Check Read Support: No 00:07:59.614 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.614 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.614 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.614 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.614 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.614 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.614 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.614 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.614 ===================================================== 00:07:59.614 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:59.614 ===================================================== 00:07:59.614 Controller Capabilities/Features 00:07:59.614 ================================ 00:07:59.614 Vendor ID: 1b36 00:07:59.614 Subsystem Vendor ID: 1af4 00:07:59.614 Serial Number: 12342 00:07:59.614 Model Number: QEMU NVMe Ctrl 00:07:59.614 Firmware Version: 8.0.0 00:07:59.614 Recommended Arb Burst: 6 00:07:59.614 IEEE OUI Identifier: 00 54 52 00:07:59.614 Multi-path I/O 00:07:59.614 May have multiple subsystem ports: No 00:07:59.614 May have multiple controllers: No 00:07:59.614 Associated with SR-IOV VF: No 00:07:59.614 Max Data Transfer Size: 524288 00:07:59.614 Max Number of Namespaces: 256 00:07:59.614 Max Number of I/O Queues: 64 00:07:59.614 
NVMe Specification Version (VS): 1.4 00:07:59.614 NVMe Specification Version (Identify): 1.4 00:07:59.614 Maximum Queue Entries: 2048 00:07:59.614 Contiguous Queues Required: Yes 00:07:59.614 Arbitration Mechanisms Supported 00:07:59.614 Weighted Round Robin: Not Supported 00:07:59.614 Vendor Specific: Not Supported 00:07:59.614 Reset Timeout: 7500 ms 00:07:59.614 Doorbell Stride: 4 bytes 00:07:59.614 NVM Subsystem Reset: Not Supported 00:07:59.614 Command Sets Supported 00:07:59.614 NVM Command Set: Supported 00:07:59.614 Boot Partition: Not Supported 00:07:59.614 Memory Page Size Minimum: 4096 bytes 00:07:59.614 Memory Page Size Maximum: 65536 bytes 00:07:59.614 Persistent Memory Region: Not Supported 00:07:59.614 Optional Asynchronous Events Supported 00:07:59.614 Namespace Attribute Notices: Supported 00:07:59.614 Firmware Activation Notices: Not Supported 00:07:59.614 ANA Change Notices: Not Supported 00:07:59.614 PLE Aggregate Log Change Notices: Not Supported 00:07:59.614 LBA Status Info Alert Notices: Not Supported 00:07:59.614 EGE Aggregate Log Change Notices: Not Supported 00:07:59.614 Normal NVM Subsystem Shutdown event: Not Supported 00:07:59.614 Zone Descriptor Change Notices: Not Supported 00:07:59.614 Discovery Log Change Notices: Not Supported 00:07:59.614 Controller Attributes 00:07:59.614 128-bit Host Identifier: Not Supported 00:07:59.614 Non-Operational Permissive Mode: Not Supported 00:07:59.614 NVM Sets: Not Supported 00:07:59.614 Read Recovery Levels: Not Supported 00:07:59.615 Endurance Groups: Not Supported 00:07:59.615 Predictable Latency Mode: Not Supported 00:07:59.615 Traffic Based Keep Alive: Not Supported 00:07:59.615 Namespace Granularity: Not Supported 00:07:59.615 SQ Associations: Not Supported 00:07:59.615 UUID List: Not Supported 00:07:59.615 Multi-Domain Subsystem: Not Supported 00:07:59.615 Fixed Capacity Management: Not Supported 00:07:59.615 Variable Capacity Management: Not Supported 00:07:59.615 Delete Endurance Group: Not Supported 00:07:59.615 Delete NVM Set: Not Supported 00:07:59.615 Extended LBA Formats Supported: Supported 00:07:59.615 Flexible Data Placement Supported: Not Supported 00:07:59.615 00:07:59.615 Controller Memory Buffer Support 00:07:59.615 ================================ 00:07:59.615 Supported: No 00:07:59.615 00:07:59.615 Persistent Memory Region Support 00:07:59.615 ================================ 00:07:59.615 Supported: No 00:07:59.615 00:07:59.615 Admin Command Set Attributes 00:07:59.615 ============================ 00:07:59.615 Security Send/Receive: Not Supported 00:07:59.615 Format NVM: Supported 00:07:59.615 Firmware Activate/Download: Not Supported 00:07:59.615 Namespace Management: Supported 00:07:59.615 Device Self-Test: Not Supported 00:07:59.615 Directives: Supported 00:07:59.615 NVMe-MI: Not Supported 00:07:59.615 Virtualization Management: Not Supported 00:07:59.615 Doorbell Buffer Config: Supported 00:07:59.615 Get LBA Status Capability: Not Supported 00:07:59.615 Command & Feature Lockdown Capability: Not Supported 00:07:59.615 Abort Command Limit: 4 00:07:59.615 Async Event Request Limit: 4 00:07:59.615 Number of Firmware Slots: N/A 00:07:59.615 Firmware Slot 1 Read-Only: N/A 00:07:59.615 Firmware Activation Without Reset: N/A 00:07:59.615 Multiple Update Detection Support: N/A 00:07:59.615 Firmware Update Granularity: No Information Provided 00:07:59.615 Per-Namespace SMART Log: Yes 00:07:59.615 Asymmetric Namespace Access Log Page: Not Supported 00:07:59.615 Subsystem NQN: nqn.2019-08.org.qemu:12342 
00:07:59.615 Command Effects Log Page: Supported 00:07:59.615 Get Log Page Extended Data: Supported 00:07:59.615 Telemetry Log Pages: Not Supported 00:07:59.615 Persistent Event Log Pages: Not Supported 00:07:59.615 Supported Log Pages Log Page: May Support 00:07:59.615 Commands Supported & Effects Log Page: Not Supported 00:07:59.615 Feature Identifiers & Effects Log Page: May Support 00:07:59.615 NVMe-MI Commands & Effects Log Page: May Support 00:07:59.615 Data Area 4 for Telemetry Log: Not Supported 00:07:59.615 Error Log Page Entries Supported: 1 00:07:59.615 Keep Alive: Not Supported 00:07:59.615 00:07:59.615 NVM Command Set Attributes 00:07:59.615 ========================== 00:07:59.615 Submission Queue Entry Size 00:07:59.615 Max: 64 00:07:59.615 Min: 64 00:07:59.615 Completion Queue Entry Size 00:07:59.615 Max: 16 00:07:59.615 Min: 16 00:07:59.615 Number of Namespaces: 256 00:07:59.615 Compare Command: Supported 00:07:59.615 Write Uncorrectable Command: Not Supported 00:07:59.615 Dataset Management Command: Supported 00:07:59.615 Write Zeroes Command: Supported 00:07:59.615 Set Features Save Field: Supported 00:07:59.615 Reservations: Not Supported 00:07:59.615 Timestamp: Supported 00:07:59.615 Copy: Supported 00:07:59.615 Volatile Write Cache: Present 00:07:59.615 Atomic Write Unit (Normal): 1 00:07:59.615 Atomic Write Unit (PFail): 1 00:07:59.615 Atomic Compare & Write Unit: 1 00:07:59.615 Fused Compare & Write: Not Supported 00:07:59.615 Scatter-Gather List 00:07:59.615 SGL Command Set: Supported 00:07:59.615 SGL Keyed: Not Supported 00:07:59.615 SGL Bit Bucket Descriptor: Not Supported 00:07:59.615 SGL Metadata Pointer: Not Supported 00:07:59.615 Oversized SGL: Not Supported 00:07:59.615 SGL Metadata Address: Not Supported 00:07:59.615 SGL Offset: Not Supported 00:07:59.615 Transport SGL Data Block: Not Supported 00:07:59.615 Replay Protected Memory Block: Not Supported 00:07:59.615 00:07:59.615 Firmware Slot Information 00:07:59.615 ========================= 00:07:59.615 Active slot: 1 00:07:59.615 Slot 1 Firmware Revision: 1.0 00:07:59.615 00:07:59.615 00:07:59.615 Commands Supported and Effects 00:07:59.615 ============================== 00:07:59.615 Admin Commands 00:07:59.615 -------------- 00:07:59.615 Delete I/O Submission Queue (00h): Supported 00:07:59.615 Create I/O Submission Queue (01h): Supported 00:07:59.615 Get Log Page (02h): Supported 00:07:59.615 Delete I/O Completion Queue (04h): Supported 00:07:59.615 Create I/O Completion Queue (05h): Supported 00:07:59.615 Identify (06h): Supported 00:07:59.615 Abort (08h): Supported 00:07:59.615 Set Features (09h): Supported 00:07:59.615 Get Features (0Ah): Supported 00:07:59.615 Asynchronous Event Request (0Ch): Supported 00:07:59.615 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:59.615 Directive Send (19h): Supported 00:07:59.615 Directive Receive (1Ah): Supported 00:07:59.615 Virtualization Management (1Ch): Supported 00:07:59.615 Doorbell Buffer Config (7Ch): Supported 00:07:59.615 Format NVM (80h): Supported LBA-Change 00:07:59.615 I/O Commands 00:07:59.615 ------------ 00:07:59.615 Flush (00h): Supported LBA-Change 00:07:59.615 Write (01h): Supported LBA-Change 00:07:59.615 Read (02h): Supported 00:07:59.615 Compare (05h): Supported 00:07:59.615 Write Zeroes (08h): Supported LBA-Change 00:07:59.615 Dataset Management (09h): Supported LBA-Change 00:07:59.615 Unknown (0Ch): Supported 00:07:59.615 Unknown (12h): Supported 00:07:59.615 Copy (19h): Supported LBA-Change 00:07:59.615 Unknown (1Dh): 
Supported LBA-Change 00:07:59.615 00:07:59.615 Error Log 00:07:59.615 ========= 00:07:59.615 00:07:59.615 Arbitration 00:07:59.615 =========== 00:07:59.615 Arbitration Burst: no limit 00:07:59.615 00:07:59.615 Power Management 00:07:59.615 ================ 00:07:59.615 Number of Power States: 1 00:07:59.615 Current Power State: Power State #0 00:07:59.615 Power State #0: 00:07:59.615 Max Power: 25.00 W 00:07:59.615 Non-Operational State: Operational 00:07:59.615 Entry Latency: 16 microseconds 00:07:59.615 Exit Latency: 4 microseconds 00:07:59.615 Relative Read Throughput: 0 00:07:59.615 Relative Read Latency: 0 00:07:59.615 Relative Write Throughput: 0 00:07:59.615 Relative Write Latency: 0 00:07:59.615 Idle Power: Not Reported 00:07:59.615 Active Power: Not Reported 00:07:59.615 Non-Operational Permissive Mode: Not Supported 00:07:59.615 00:07:59.615 Health Information 00:07:59.615 ================== 00:07:59.615 Critical Warnings: 00:07:59.615 Available Spare Space: OK 00:07:59.615 Temperature: OK 00:07:59.615 Device Reliability: OK 00:07:59.615 Read Only: No 00:07:59.615 Volatile Memory Backup: OK 00:07:59.615 Current Temperature: 323 Kelvin (50 Celsius) 00:07:59.615 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:59.615 Available Spare: 0% 00:07:59.615 Available Spare Threshold: 0% 00:07:59.615 Life Percentage Used: 0% 00:07:59.616 Data Units Read: 2253 00:07:59.616 Data Units Written: 2040 00:07:59.616 Host Read Commands: 123142 00:07:59.616 Host Write Commands: 121411 00:07:59.616 Controller Busy Time: 0 minutes 00:07:59.616 Power Cycles: 0 00:07:59.616 Power On Hours: 0 hours 00:07:59.616 Unsafe Shutdowns: 0 00:07:59.616 Unrecoverable Media Errors: 0 00:07:59.616 Lifetime Error Log Entries: 0 00:07:59.616 Warning Temperature Time: 0 minutes 00:07:59.616 Critical Temperature Time: 0 minutes 00:07:59.616 00:07:59.616 Number of Queues 00:07:59.616 ================ 00:07:59.616 Number of I/O Submission Queues: 64 00:07:59.616 Number of I/O Completion Queues: 64 00:07:59.616 00:07:59.616 ZNS Specific Controller Data 00:07:59.616 ============================ 00:07:59.616 Zone Append Size Limit: 0 00:07:59.616 00:07:59.616 00:07:59.616 Active Namespaces 00:07:59.616 ================= 00:07:59.616 Namespace ID:1 00:07:59.616 Error Recovery Timeout: Unlimited 00:07:59.616 Command Set Identifier: NVM (00h) 00:07:59.616 Deallocate: Supported 00:07:59.616 Deallocated/Unwritten Error: Supported 00:07:59.616 Deallocated Read Value: All 0x00 00:07:59.616 Deallocate in Write Zeroes: Not Supported 00:07:59.616 Deallocated Guard Field: 0xFFFF 00:07:59.616 Flush: Supported 00:07:59.616 Reservation: Not Supported 00:07:59.616 Namespace Sharing Capabilities: Private 00:07:59.616 Size (in LBAs): 1048576 (4GiB) 00:07:59.616 Capacity (in LBAs): 1048576 (4GiB) 00:07:59.616 Utilization (in LBAs): 1048576 (4GiB) 00:07:59.616 Thin Provisioning: Not Supported 00:07:59.616 Per-NS Atomic Units: No 00:07:59.616 Maximum Single Source Range Length: 128 00:07:59.616 Maximum Copy Length: 128 00:07:59.616 Maximum Source Range Count: 128 00:07:59.616 NGUID/EUI64 Never Reused: No 00:07:59.616 Namespace Write Protected: No 00:07:59.616 Number of LBA Formats: 8 00:07:59.616 Current LBA Format: LBA Format #04 00:07:59.616 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:59.616 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:59.616 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:59.616 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:59.616 LBA Format #04: Data Size: 4096 Metadata Size: 0 
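Editor's note: when scripting against a dump like the one above, the active block size can be recovered by pairing the "Current LBA Format" line with the matching "LBA Format #NN" line. A rough sketch (a hypothetical helper, not part of the test scripts; it assumes the identify output was saved one attribute per line in identify.txt, without the CI timestamp prefixes):

    # Prints the data size of the current LBA format, e.g. 4096 for "LBA Format #04".
    awk '/Current LBA Format:/ { fmt = $NF }
         fmt != "" && $0 ~ ("LBA Format " fmt ":") { print $6; exit }' identify.txt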
00:07:59.616 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:59.616 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:59.616 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:59.616 00:07:59.616 NVM Specific Namespace Data 00:07:59.616 =========================== 00:07:59.616 Logical Block Storage Tag Mask: 0 00:07:59.616 Protection Information Capabilities: 00:07:59.616 16b Guard Protection Information Storage Tag Support: No 00:07:59.616 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:59.616 Storage Tag Check Read Support: No 00:07:59.616 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Namespace ID:2 00:07:59.616 Error Recovery Timeout: Unlimited 00:07:59.616 Command Set Identifier: NVM (00h) 00:07:59.616 Deallocate: Supported 00:07:59.616 Deallocated/Unwritten Error: Supported 00:07:59.616 Deallocated Read Value: All 0x00 00:07:59.616 Deallocate in Write Zeroes: Not Supported 00:07:59.616 Deallocated Guard Field: 0xFFFF 00:07:59.616 Flush: Supported 00:07:59.616 Reservation: Not Supported 00:07:59.616 Namespace Sharing Capabilities: Private 00:07:59.616 Size (in LBAs): 1048576 (4GiB) 00:07:59.616 Capacity (in LBAs): 1048576 (4GiB) 00:07:59.616 Utilization (in LBAs): 1048576 (4GiB) 00:07:59.616 Thin Provisioning: Not Supported 00:07:59.616 Per-NS Atomic Units: No 00:07:59.616 Maximum Single Source Range Length: 128 00:07:59.616 Maximum Copy Length: 128 00:07:59.616 Maximum Source Range Count: 128 00:07:59.616 NGUID/EUI64 Never Reused: No 00:07:59.616 Namespace Write Protected: No 00:07:59.616 Number of LBA Formats: 8 00:07:59.616 Current LBA Format: LBA Format #04 00:07:59.616 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:59.616 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:59.616 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:59.616 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:59.616 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:59.616 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:59.616 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:59.616 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:59.616 00:07:59.616 NVM Specific Namespace Data 00:07:59.616 =========================== 00:07:59.616 Logical Block Storage Tag Mask: 0 00:07:59.616 Protection Information Capabilities: 00:07:59.616 16b Guard Protection Information Storage Tag Support: No 00:07:59.616 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:59.616 Storage Tag Check Read Support: No 00:07:59.616 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.616 Namespace ID:3 00:07:59.616 Error Recovery Timeout: Unlimited 00:07:59.616 Command Set Identifier: NVM (00h) 00:07:59.616 Deallocate: Supported 00:07:59.616 Deallocated/Unwritten Error: Supported 00:07:59.616 Deallocated Read Value: All 0x00 00:07:59.616 Deallocate in Write Zeroes: Not Supported 00:07:59.616 Deallocated Guard Field: 0xFFFF 00:07:59.616 Flush: Supported 00:07:59.616 Reservation: Not Supported 00:07:59.616 Namespace Sharing Capabilities: Private 00:07:59.616 Size (in LBAs): 1048576 (4GiB) 00:07:59.616 Capacity (in LBAs): 1048576 (4GiB) 00:07:59.616 Utilization (in LBAs): 1048576 (4GiB) 00:07:59.616 Thin Provisioning: Not Supported 00:07:59.616 Per-NS Atomic Units: No 00:07:59.616 Maximum Single Source Range Length: 128 00:07:59.616 Maximum Copy Length: 128 00:07:59.616 Maximum Source Range Count: 128 00:07:59.616 NGUID/EUI64 Never Reused: No 00:07:59.616 Namespace Write Protected: No 00:07:59.617 Number of LBA Formats: 8 00:07:59.617 Current LBA Format: LBA Format #04 00:07:59.617 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:59.617 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:59.617 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:59.617 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:59.617 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:59.617 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:59.617 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:59.617 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:59.617 00:07:59.617 NVM Specific Namespace Data 00:07:59.617 =========================== 00:07:59.617 Logical Block Storage Tag Mask: 0 00:07:59.617 Protection Information Capabilities: 00:07:59.617 16b Guard Protection Information Storage Tag Support: No 00:07:59.617 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:59.617 Storage Tag Check Read Support: No 00:07:59.617 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.617 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.617 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.617 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.617 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.617 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.617 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.617 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.617 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:59.617 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:59.879 ===================================================== 00:07:59.879 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:59.879 ===================================================== 00:07:59.880 Controller Capabilities/Features 00:07:59.880 ================================ 00:07:59.880 Vendor ID: 1b36 00:07:59.880 Subsystem Vendor ID: 1af4 00:07:59.880 Serial Number: 12340 00:07:59.880 Model Number: QEMU NVMe Ctrl 00:07:59.880 Firmware Version: 8.0.0 00:07:59.880 Recommended Arb Burst: 6 00:07:59.880 IEEE OUI Identifier: 00 54 52 00:07:59.880 Multi-path I/O 00:07:59.880 May have multiple subsystem ports: No 00:07:59.880 May have multiple controllers: No 00:07:59.880 Associated with SR-IOV VF: No 00:07:59.880 Max Data Transfer Size: 524288 00:07:59.880 Max Number of Namespaces: 256 00:07:59.880 Max Number of I/O Queues: 64 00:07:59.880 NVMe Specification Version (VS): 1.4 00:07:59.880 NVMe Specification Version (Identify): 1.4 00:07:59.880 Maximum Queue Entries: 2048 00:07:59.880 Contiguous Queues Required: Yes 00:07:59.880 Arbitration Mechanisms Supported 00:07:59.880 Weighted Round Robin: Not Supported 00:07:59.880 Vendor Specific: Not Supported 00:07:59.880 Reset Timeout: 7500 ms 00:07:59.880 Doorbell Stride: 4 bytes 00:07:59.880 NVM Subsystem Reset: Not Supported 00:07:59.880 Command Sets Supported 00:07:59.880 NVM Command Set: Supported 00:07:59.880 Boot Partition: Not Supported 00:07:59.880 Memory Page Size Minimum: 4096 bytes 00:07:59.880 Memory Page Size Maximum: 65536 bytes 00:07:59.880 Persistent Memory Region: Not Supported 00:07:59.880 Optional Asynchronous Events Supported 00:07:59.880 Namespace Attribute Notices: Supported 00:07:59.880 Firmware Activation Notices: Not Supported 00:07:59.880 ANA Change Notices: Not Supported 00:07:59.880 PLE Aggregate Log Change Notices: Not Supported 00:07:59.880 LBA Status Info Alert Notices: Not Supported 00:07:59.880 EGE Aggregate Log Change Notices: Not Supported 00:07:59.880 Normal NVM Subsystem Shutdown event: Not Supported 00:07:59.880 Zone Descriptor Change Notices: Not Supported 00:07:59.880 Discovery Log Change Notices: Not Supported 00:07:59.880 Controller Attributes 00:07:59.880 128-bit Host Identifier: Not Supported 00:07:59.880 Non-Operational Permissive Mode: Not Supported 00:07:59.880 NVM Sets: Not Supported 00:07:59.880 Read Recovery Levels: Not Supported 00:07:59.880 Endurance Groups: Not Supported 00:07:59.880 Predictable Latency Mode: Not Supported 00:07:59.880 Traffic Based Keep ALive: Not Supported 00:07:59.880 Namespace Granularity: Not Supported 00:07:59.880 SQ Associations: Not Supported 00:07:59.880 UUID List: Not Supported 00:07:59.880 Multi-Domain Subsystem: Not Supported 00:07:59.880 Fixed Capacity Management: Not Supported 00:07:59.880 Variable Capacity Management: Not Supported 00:07:59.880 Delete Endurance Group: Not Supported 00:07:59.880 Delete NVM Set: Not Supported 00:07:59.880 Extended LBA Formats Supported: Supported 00:07:59.880 Flexible Data Placement Supported: Not Supported 00:07:59.880 00:07:59.880 Controller Memory Buffer Support 00:07:59.880 ================================ 00:07:59.880 Supported: No 00:07:59.880 00:07:59.880 Persistent Memory Region Support 00:07:59.880 ================================ 00:07:59.880 Supported: No 00:07:59.880 00:07:59.880 Admin Command Set Attributes 00:07:59.880 ============================ 00:07:59.880 Security Send/Receive: Not Supported 00:07:59.880 
Format NVM: Supported 00:07:59.880 Firmware Activate/Download: Not Supported 00:07:59.880 Namespace Management: Supported 00:07:59.880 Device Self-Test: Not Supported 00:07:59.880 Directives: Supported 00:07:59.880 NVMe-MI: Not Supported 00:07:59.880 Virtualization Management: Not Supported 00:07:59.880 Doorbell Buffer Config: Supported 00:07:59.880 Get LBA Status Capability: Not Supported 00:07:59.880 Command & Feature Lockdown Capability: Not Supported 00:07:59.880 Abort Command Limit: 4 00:07:59.880 Async Event Request Limit: 4 00:07:59.880 Number of Firmware Slots: N/A 00:07:59.880 Firmware Slot 1 Read-Only: N/A 00:07:59.880 Firmware Activation Without Reset: N/A 00:07:59.880 Multiple Update Detection Support: N/A 00:07:59.880 Firmware Update Granularity: No Information Provided 00:07:59.880 Per-Namespace SMART Log: Yes 00:07:59.880 Asymmetric Namespace Access Log Page: Not Supported 00:07:59.880 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:59.880 Command Effects Log Page: Supported 00:07:59.880 Get Log Page Extended Data: Supported 00:07:59.880 Telemetry Log Pages: Not Supported 00:07:59.880 Persistent Event Log Pages: Not Supported 00:07:59.880 Supported Log Pages Log Page: May Support 00:07:59.880 Commands Supported & Effects Log Page: Not Supported 00:07:59.880 Feature Identifiers & Effects Log Page:May Support 00:07:59.880 NVMe-MI Commands & Effects Log Page: May Support 00:07:59.880 Data Area 4 for Telemetry Log: Not Supported 00:07:59.880 Error Log Page Entries Supported: 1 00:07:59.880 Keep Alive: Not Supported 00:07:59.880 00:07:59.880 NVM Command Set Attributes 00:07:59.880 ========================== 00:07:59.880 Submission Queue Entry Size 00:07:59.880 Max: 64 00:07:59.880 Min: 64 00:07:59.880 Completion Queue Entry Size 00:07:59.880 Max: 16 00:07:59.880 Min: 16 00:07:59.880 Number of Namespaces: 256 00:07:59.880 Compare Command: Supported 00:07:59.880 Write Uncorrectable Command: Not Supported 00:07:59.880 Dataset Management Command: Supported 00:07:59.880 Write Zeroes Command: Supported 00:07:59.880 Set Features Save Field: Supported 00:07:59.880 Reservations: Not Supported 00:07:59.880 Timestamp: Supported 00:07:59.880 Copy: Supported 00:07:59.880 Volatile Write Cache: Present 00:07:59.880 Atomic Write Unit (Normal): 1 00:07:59.880 Atomic Write Unit (PFail): 1 00:07:59.880 Atomic Compare & Write Unit: 1 00:07:59.880 Fused Compare & Write: Not Supported 00:07:59.880 Scatter-Gather List 00:07:59.880 SGL Command Set: Supported 00:07:59.880 SGL Keyed: Not Supported 00:07:59.880 SGL Bit Bucket Descriptor: Not Supported 00:07:59.880 SGL Metadata Pointer: Not Supported 00:07:59.880 Oversized SGL: Not Supported 00:07:59.880 SGL Metadata Address: Not Supported 00:07:59.880 SGL Offset: Not Supported 00:07:59.880 Transport SGL Data Block: Not Supported 00:07:59.880 Replay Protected Memory Block: Not Supported 00:07:59.880 00:07:59.880 Firmware Slot Information 00:07:59.880 ========================= 00:07:59.880 Active slot: 1 00:07:59.880 Slot 1 Firmware Revision: 1.0 00:07:59.880 00:07:59.880 00:07:59.880 Commands Supported and Effects 00:07:59.880 ============================== 00:07:59.880 Admin Commands 00:07:59.880 -------------- 00:07:59.880 Delete I/O Submission Queue (00h): Supported 00:07:59.880 Create I/O Submission Queue (01h): Supported 00:07:59.880 Get Log Page (02h): Supported 00:07:59.880 Delete I/O Completion Queue (04h): Supported 00:07:59.880 Create I/O Completion Queue (05h): Supported 00:07:59.880 Identify (06h): Supported 00:07:59.880 Abort (08h): Supported 
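Editor's note: each dump's health section reports temperature in both Kelvin and Celsius, and the printed pairs are consistent with a plain 273 K offset, which is easy to sanity-check in the shell:

    # Matches "323 Kelvin (50 Celsius)" and "343 Kelvin (70 Celsius)" in these dumps.
    echo $(( 323 - 273 ))   # 50
    echo $(( 343 - 273 ))   # 70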
00:07:59.880 Set Features (09h): Supported 00:07:59.880 Get Features (0Ah): Supported 00:07:59.880 Asynchronous Event Request (0Ch): Supported 00:07:59.880 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:59.880 Directive Send (19h): Supported 00:07:59.880 Directive Receive (1Ah): Supported 00:07:59.880 Virtualization Management (1Ch): Supported 00:07:59.880 Doorbell Buffer Config (7Ch): Supported 00:07:59.880 Format NVM (80h): Supported LBA-Change 00:07:59.881 I/O Commands 00:07:59.881 ------------ 00:07:59.881 Flush (00h): Supported LBA-Change 00:07:59.881 Write (01h): Supported LBA-Change 00:07:59.881 Read (02h): Supported 00:07:59.881 Compare (05h): Supported 00:07:59.881 Write Zeroes (08h): Supported LBA-Change 00:07:59.881 Dataset Management (09h): Supported LBA-Change 00:07:59.881 Unknown (0Ch): Supported 00:07:59.881 Unknown (12h): Supported 00:07:59.881 Copy (19h): Supported LBA-Change 00:07:59.881 Unknown (1Dh): Supported LBA-Change 00:07:59.881 00:07:59.881 Error Log 00:07:59.881 ========= 00:07:59.881 00:07:59.881 Arbitration 00:07:59.881 =========== 00:07:59.881 Arbitration Burst: no limit 00:07:59.881 00:07:59.881 Power Management 00:07:59.881 ================ 00:07:59.881 Number of Power States: 1 00:07:59.881 Current Power State: Power State #0 00:07:59.881 Power State #0: 00:07:59.881 Max Power: 25.00 W 00:07:59.881 Non-Operational State: Operational 00:07:59.881 Entry Latency: 16 microseconds 00:07:59.881 Exit Latency: 4 microseconds 00:07:59.881 Relative Read Throughput: 0 00:07:59.881 Relative Read Latency: 0 00:07:59.881 Relative Write Throughput: 0 00:07:59.881 Relative Write Latency: 0 00:07:59.881 Idle Power: Not Reported 00:07:59.881 Active Power: Not Reported 00:07:59.881 Non-Operational Permissive Mode: Not Supported 00:07:59.881 00:07:59.881 Health Information 00:07:59.881 ================== 00:07:59.881 Critical Warnings: 00:07:59.881 Available Spare Space: OK 00:07:59.881 Temperature: OK 00:07:59.881 Device Reliability: OK 00:07:59.881 Read Only: No 00:07:59.881 Volatile Memory Backup: OK 00:07:59.881 Current Temperature: 323 Kelvin (50 Celsius) 00:07:59.881 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:59.881 Available Spare: 0% 00:07:59.881 Available Spare Threshold: 0% 00:07:59.881 Life Percentage Used: 0% 00:07:59.881 Data Units Read: 723 00:07:59.881 Data Units Written: 651 00:07:59.881 Host Read Commands: 40585 00:07:59.881 Host Write Commands: 40371 00:07:59.881 Controller Busy Time: 0 minutes 00:07:59.881 Power Cycles: 0 00:07:59.881 Power On Hours: 0 hours 00:07:59.881 Unsafe Shutdowns: 0 00:07:59.881 Unrecoverable Media Errors: 0 00:07:59.881 Lifetime Error Log Entries: 0 00:07:59.881 Warning Temperature Time: 0 minutes 00:07:59.881 Critical Temperature Time: 0 minutes 00:07:59.881 00:07:59.881 Number of Queues 00:07:59.881 ================ 00:07:59.881 Number of I/O Submission Queues: 64 00:07:59.881 Number of I/O Completion Queues: 64 00:07:59.881 00:07:59.881 ZNS Specific Controller Data 00:07:59.881 ============================ 00:07:59.881 Zone Append Size Limit: 0 00:07:59.881 00:07:59.881 00:07:59.881 Active Namespaces 00:07:59.881 ================= 00:07:59.881 Namespace ID:1 00:07:59.881 Error Recovery Timeout: Unlimited 00:07:59.881 Command Set Identifier: NVM (00h) 00:07:59.881 Deallocate: Supported 00:07:59.881 Deallocated/Unwritten Error: Supported 00:07:59.881 Deallocated Read Value: All 0x00 00:07:59.881 Deallocate in Write Zeroes: Not Supported 00:07:59.881 Deallocated Guard Field: 0xFFFF 00:07:59.881 Flush: 
Supported 00:07:59.881 Reservation: Not Supported 00:07:59.881 Metadata Transferred as: Separate Metadata Buffer 00:07:59.881 Namespace Sharing Capabilities: Private 00:07:59.881 Size (in LBAs): 1548666 (5GiB) 00:07:59.881 Capacity (in LBAs): 1548666 (5GiB) 00:07:59.881 Utilization (in LBAs): 1548666 (5GiB) 00:07:59.881 Thin Provisioning: Not Supported 00:07:59.881 Per-NS Atomic Units: No 00:07:59.881 Maximum Single Source Range Length: 128 00:07:59.881 Maximum Copy Length: 128 00:07:59.881 Maximum Source Range Count: 128 00:07:59.881 NGUID/EUI64 Never Reused: No 00:07:59.881 Namespace Write Protected: No 00:07:59.881 Number of LBA Formats: 8 00:07:59.881 Current LBA Format: LBA Format #07 00:07:59.881 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:59.881 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:59.881 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:59.881 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:59.881 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:59.881 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:59.881 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:59.881 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:59.881 00:07:59.881 NVM Specific Namespace Data 00:07:59.881 =========================== 00:07:59.881 Logical Block Storage Tag Mask: 0 00:07:59.881 Protection Information Capabilities: 00:07:59.881 16b Guard Protection Information Storage Tag Support: No 00:07:59.881 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:59.881 Storage Tag Check Read Support: No 00:07:59.881 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.881 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.881 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.881 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.881 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.881 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.881 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.881 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.881 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:59.881 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:59.881 ===================================================== 00:07:59.881 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:59.881 ===================================================== 00:07:59.881 Controller Capabilities/Features 00:07:59.881 ================================ 00:07:59.881 Vendor ID: 1b36 00:07:59.881 Subsystem Vendor ID: 1af4 00:07:59.881 Serial Number: 12341 00:07:59.881 Model Number: QEMU NVMe Ctrl 00:07:59.881 Firmware Version: 8.0.0 00:07:59.881 Recommended Arb Burst: 6 00:07:59.881 IEEE OUI Identifier: 00 54 52 00:07:59.881 Multi-path I/O 00:07:59.881 May have multiple subsystem ports: No 00:07:59.881 May have multiple controllers: No 00:07:59.881 Associated with SR-IOV VF: No 00:07:59.881 Max Data Transfer Size: 524288 00:07:59.881 Max Number of Namespaces: 256 00:07:59.881 Max Number of I/O Queues: 64 00:07:59.881 NVMe 
Specification Version (VS): 1.4 00:07:59.881 NVMe Specification Version (Identify): 1.4 00:07:59.881 Maximum Queue Entries: 2048 00:07:59.881 Contiguous Queues Required: Yes 00:07:59.881 Arbitration Mechanisms Supported 00:07:59.881 Weighted Round Robin: Not Supported 00:07:59.881 Vendor Specific: Not Supported 00:07:59.881 Reset Timeout: 7500 ms 00:07:59.881 Doorbell Stride: 4 bytes 00:07:59.881 NVM Subsystem Reset: Not Supported 00:07:59.882 Command Sets Supported 00:07:59.882 NVM Command Set: Supported 00:07:59.882 Boot Partition: Not Supported 00:07:59.882 Memory Page Size Minimum: 4096 bytes 00:07:59.882 Memory Page Size Maximum: 65536 bytes 00:07:59.882 Persistent Memory Region: Not Supported 00:07:59.882 Optional Asynchronous Events Supported 00:07:59.882 Namespace Attribute Notices: Supported 00:07:59.882 Firmware Activation Notices: Not Supported 00:07:59.882 ANA Change Notices: Not Supported 00:07:59.882 PLE Aggregate Log Change Notices: Not Supported 00:07:59.882 LBA Status Info Alert Notices: Not Supported 00:07:59.882 EGE Aggregate Log Change Notices: Not Supported 00:07:59.882 Normal NVM Subsystem Shutdown event: Not Supported 00:07:59.882 Zone Descriptor Change Notices: Not Supported 00:07:59.882 Discovery Log Change Notices: Not Supported 00:07:59.882 Controller Attributes 00:07:59.882 128-bit Host Identifier: Not Supported 00:07:59.882 Non-Operational Permissive Mode: Not Supported 00:07:59.882 NVM Sets: Not Supported 00:07:59.882 Read Recovery Levels: Not Supported 00:07:59.882 Endurance Groups: Not Supported 00:07:59.882 Predictable Latency Mode: Not Supported 00:07:59.882 Traffic Based Keep ALive: Not Supported 00:07:59.882 Namespace Granularity: Not Supported 00:07:59.882 SQ Associations: Not Supported 00:07:59.882 UUID List: Not Supported 00:07:59.882 Multi-Domain Subsystem: Not Supported 00:07:59.882 Fixed Capacity Management: Not Supported 00:07:59.882 Variable Capacity Management: Not Supported 00:07:59.882 Delete Endurance Group: Not Supported 00:07:59.882 Delete NVM Set: Not Supported 00:07:59.882 Extended LBA Formats Supported: Supported 00:07:59.882 Flexible Data Placement Supported: Not Supported 00:07:59.882 00:07:59.882 Controller Memory Buffer Support 00:07:59.882 ================================ 00:07:59.882 Supported: No 00:07:59.882 00:07:59.882 Persistent Memory Region Support 00:07:59.882 ================================ 00:07:59.882 Supported: No 00:07:59.882 00:07:59.882 Admin Command Set Attributes 00:07:59.882 ============================ 00:07:59.882 Security Send/Receive: Not Supported 00:07:59.882 Format NVM: Supported 00:07:59.882 Firmware Activate/Download: Not Supported 00:07:59.882 Namespace Management: Supported 00:07:59.882 Device Self-Test: Not Supported 00:07:59.882 Directives: Supported 00:07:59.882 NVMe-MI: Not Supported 00:07:59.882 Virtualization Management: Not Supported 00:07:59.882 Doorbell Buffer Config: Supported 00:07:59.882 Get LBA Status Capability: Not Supported 00:07:59.882 Command & Feature Lockdown Capability: Not Supported 00:07:59.882 Abort Command Limit: 4 00:07:59.882 Async Event Request Limit: 4 00:07:59.882 Number of Firmware Slots: N/A 00:07:59.882 Firmware Slot 1 Read-Only: N/A 00:07:59.882 Firmware Activation Without Reset: N/A 00:07:59.882 Multiple Update Detection Support: N/A 00:07:59.882 Firmware Update Granularity: No Information Provided 00:07:59.882 Per-Namespace SMART Log: Yes 00:07:59.882 Asymmetric Namespace Access Log Page: Not Supported 00:07:59.882 Subsystem NQN: nqn.2019-08.org.qemu:12341 
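Editor's note: the QEMU controllers in this rig embed their serial numbers in the subsystem NQN (nqn.2019-08.org.qemu:12340, :12341, :12342, and fdp-subsys3 for the 12343 controller further down), so dumps can be mapped back to drives with a single grep over a saved copy of the log (hypothetical filename):

    # One match per identify dump; order follows the dumps in the log.
    grep -o 'Subsystem NQN: [^ ]*' build.log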
00:07:59.882 Command Effects Log Page: Supported 00:07:59.882 Get Log Page Extended Data: Supported 00:07:59.882 Telemetry Log Pages: Not Supported 00:07:59.882 Persistent Event Log Pages: Not Supported 00:07:59.882 Supported Log Pages Log Page: May Support 00:07:59.882 Commands Supported & Effects Log Page: Not Supported 00:07:59.882 Feature Identifiers & Effects Log Page:May Support 00:07:59.882 NVMe-MI Commands & Effects Log Page: May Support 00:07:59.882 Data Area 4 for Telemetry Log: Not Supported 00:07:59.882 Error Log Page Entries Supported: 1 00:07:59.882 Keep Alive: Not Supported 00:07:59.882 00:07:59.882 NVM Command Set Attributes 00:07:59.882 ========================== 00:07:59.882 Submission Queue Entry Size 00:07:59.882 Max: 64 00:07:59.882 Min: 64 00:07:59.882 Completion Queue Entry Size 00:07:59.882 Max: 16 00:07:59.882 Min: 16 00:07:59.882 Number of Namespaces: 256 00:07:59.882 Compare Command: Supported 00:07:59.882 Write Uncorrectable Command: Not Supported 00:07:59.882 Dataset Management Command: Supported 00:07:59.882 Write Zeroes Command: Supported 00:07:59.882 Set Features Save Field: Supported 00:07:59.882 Reservations: Not Supported 00:07:59.882 Timestamp: Supported 00:07:59.882 Copy: Supported 00:07:59.882 Volatile Write Cache: Present 00:07:59.882 Atomic Write Unit (Normal): 1 00:07:59.882 Atomic Write Unit (PFail): 1 00:07:59.882 Atomic Compare & Write Unit: 1 00:07:59.882 Fused Compare & Write: Not Supported 00:07:59.882 Scatter-Gather List 00:07:59.882 SGL Command Set: Supported 00:07:59.882 SGL Keyed: Not Supported 00:07:59.882 SGL Bit Bucket Descriptor: Not Supported 00:07:59.882 SGL Metadata Pointer: Not Supported 00:07:59.882 Oversized SGL: Not Supported 00:07:59.882 SGL Metadata Address: Not Supported 00:07:59.882 SGL Offset: Not Supported 00:07:59.882 Transport SGL Data Block: Not Supported 00:07:59.882 Replay Protected Memory Block: Not Supported 00:07:59.882 00:07:59.882 Firmware Slot Information 00:07:59.882 ========================= 00:07:59.882 Active slot: 1 00:07:59.882 Slot 1 Firmware Revision: 1.0 00:07:59.882 00:07:59.882 00:07:59.882 Commands Supported and Effects 00:07:59.882 ============================== 00:07:59.882 Admin Commands 00:07:59.882 -------------- 00:07:59.882 Delete I/O Submission Queue (00h): Supported 00:07:59.882 Create I/O Submission Queue (01h): Supported 00:07:59.882 Get Log Page (02h): Supported 00:07:59.882 Delete I/O Completion Queue (04h): Supported 00:07:59.882 Create I/O Completion Queue (05h): Supported 00:07:59.882 Identify (06h): Supported 00:07:59.882 Abort (08h): Supported 00:07:59.882 Set Features (09h): Supported 00:07:59.882 Get Features (0Ah): Supported 00:07:59.882 Asynchronous Event Request (0Ch): Supported 00:07:59.882 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:59.882 Directive Send (19h): Supported 00:07:59.882 Directive Receive (1Ah): Supported 00:07:59.882 Virtualization Management (1Ch): Supported 00:07:59.882 Doorbell Buffer Config (7Ch): Supported 00:07:59.882 Format NVM (80h): Supported LBA-Change 00:07:59.882 I/O Commands 00:07:59.882 ------------ 00:07:59.882 Flush (00h): Supported LBA-Change 00:07:59.882 Write (01h): Supported LBA-Change 00:07:59.882 Read (02h): Supported 00:07:59.882 Compare (05h): Supported 00:07:59.883 Write Zeroes (08h): Supported LBA-Change 00:07:59.883 Dataset Management (09h): Supported LBA-Change 00:07:59.883 Unknown (0Ch): Supported 00:07:59.883 Unknown (12h): Supported 00:07:59.883 Copy (19h): Supported LBA-Change 00:07:59.883 Unknown (1Dh): 
Supported LBA-Change 00:07:59.883 00:07:59.883 Error Log 00:07:59.883 ========= 00:07:59.883 00:07:59.883 Arbitration 00:07:59.883 =========== 00:07:59.883 Arbitration Burst: no limit 00:07:59.883 00:07:59.883 Power Management 00:07:59.883 ================ 00:07:59.883 Number of Power States: 1 00:07:59.883 Current Power State: Power State #0 00:07:59.883 Power State #0: 00:07:59.883 Max Power: 25.00 W 00:07:59.883 Non-Operational State: Operational 00:07:59.883 Entry Latency: 16 microseconds 00:07:59.883 Exit Latency: 4 microseconds 00:07:59.883 Relative Read Throughput: 0 00:07:59.883 Relative Read Latency: 0 00:07:59.883 Relative Write Throughput: 0 00:07:59.883 Relative Write Latency: 0 00:07:59.883 Idle Power: Not Reported 00:07:59.883 Active Power: Not Reported 00:07:59.883 Non-Operational Permissive Mode: Not Supported 00:07:59.883 00:07:59.883 Health Information 00:07:59.883 ================== 00:07:59.883 Critical Warnings: 00:07:59.883 Available Spare Space: OK 00:07:59.883 Temperature: OK 00:07:59.883 Device Reliability: OK 00:07:59.883 Read Only: No 00:07:59.883 Volatile Memory Backup: OK 00:07:59.883 Current Temperature: 323 Kelvin (50 Celsius) 00:07:59.883 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:59.883 Available Spare: 0% 00:07:59.883 Available Spare Threshold: 0% 00:07:59.883 Life Percentage Used: 0% 00:07:59.883 Data Units Read: 1099 00:07:59.883 Data Units Written: 971 00:07:59.883 Host Read Commands: 58038 00:07:59.883 Host Write Commands: 56919 00:07:59.883 Controller Busy Time: 0 minutes 00:07:59.883 Power Cycles: 0 00:07:59.883 Power On Hours: 0 hours 00:07:59.883 Unsafe Shutdowns: 0 00:07:59.883 Unrecoverable Media Errors: 0 00:07:59.883 Lifetime Error Log Entries: 0 00:07:59.883 Warning Temperature Time: 0 minutes 00:07:59.883 Critical Temperature Time: 0 minutes 00:07:59.883 00:07:59.883 Number of Queues 00:07:59.883 ================ 00:07:59.883 Number of I/O Submission Queues: 64 00:07:59.883 Number of I/O Completion Queues: 64 00:07:59.883 00:07:59.883 ZNS Specific Controller Data 00:07:59.883 ============================ 00:07:59.883 Zone Append Size Limit: 0 00:07:59.883 00:07:59.883 00:07:59.883 Active Namespaces 00:07:59.883 ================= 00:07:59.883 Namespace ID:1 00:07:59.883 Error Recovery Timeout: Unlimited 00:07:59.883 Command Set Identifier: NVM (00h) 00:07:59.883 Deallocate: Supported 00:07:59.883 Deallocated/Unwritten Error: Supported 00:07:59.883 Deallocated Read Value: All 0x00 00:07:59.883 Deallocate in Write Zeroes: Not Supported 00:07:59.883 Deallocated Guard Field: 0xFFFF 00:07:59.883 Flush: Supported 00:07:59.883 Reservation: Not Supported 00:07:59.883 Namespace Sharing Capabilities: Private 00:07:59.883 Size (in LBAs): 1310720 (5GiB) 00:07:59.883 Capacity (in LBAs): 1310720 (5GiB) 00:07:59.883 Utilization (in LBAs): 1310720 (5GiB) 00:07:59.883 Thin Provisioning: Not Supported 00:07:59.883 Per-NS Atomic Units: No 00:07:59.883 Maximum Single Source Range Length: 128 00:07:59.883 Maximum Copy Length: 128 00:07:59.883 Maximum Source Range Count: 128 00:07:59.883 NGUID/EUI64 Never Reused: No 00:07:59.883 Namespace Write Protected: No 00:07:59.883 Number of LBA Formats: 8 00:07:59.883 Current LBA Format: LBA Format #04 00:07:59.883 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:59.883 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:59.883 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:59.883 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:59.883 LBA Format #04: Data Size: 4096 Metadata Size: 0 
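Editor's note: the namespace sizes advertised in LBAs line up with the GiB figures once multiplied by the current LBA format's 4096-byte data size; the 1310720-LBA namespace on controller 12341 above is exactly 5 GiB, and the 1048576-LBA namespaces on 12342 are 4 GiB. A quick shell check (not part of the test):

    echo $(( 1310720 * 4096 / (1024 * 1024 * 1024) ))   # 5 (GiB), controller 12341
    echo $(( 1048576 * 4096 / (1024 * 1024 * 1024) ))   # 4 (GiB), the 12342 namespaces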
00:07:59.883 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:59.883 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:59.883 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:59.883 00:07:59.883 NVM Specific Namespace Data 00:07:59.883 =========================== 00:07:59.883 Logical Block Storage Tag Mask: 0 00:07:59.883 Protection Information Capabilities: 00:07:59.883 16b Guard Protection Information Storage Tag Support: No 00:07:59.883 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:59.883 Storage Tag Check Read Support: No 00:07:59.883 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.883 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.883 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.883 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.883 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.883 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.883 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.883 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.883 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:59.883 21:39:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:00.146 ===================================================== 00:08:00.146 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.146 ===================================================== 00:08:00.146 Controller Capabilities/Features 00:08:00.146 ================================ 00:08:00.146 Vendor ID: 1b36 00:08:00.146 Subsystem Vendor ID: 1af4 00:08:00.146 Serial Number: 12342 00:08:00.146 Model Number: QEMU NVMe Ctrl 00:08:00.146 Firmware Version: 8.0.0 00:08:00.146 Recommended Arb Burst: 6 00:08:00.146 IEEE OUI Identifier: 00 54 52 00:08:00.146 Multi-path I/O 00:08:00.146 May have multiple subsystem ports: No 00:08:00.146 May have multiple controllers: No 00:08:00.146 Associated with SR-IOV VF: No 00:08:00.146 Max Data Transfer Size: 524288 00:08:00.146 Max Number of Namespaces: 256 00:08:00.146 Max Number of I/O Queues: 64 00:08:00.146 NVMe Specification Version (VS): 1.4 00:08:00.146 NVMe Specification Version (Identify): 1.4 00:08:00.146 Maximum Queue Entries: 2048 00:08:00.146 Contiguous Queues Required: Yes 00:08:00.146 Arbitration Mechanisms Supported 00:08:00.146 Weighted Round Robin: Not Supported 00:08:00.146 Vendor Specific: Not Supported 00:08:00.146 Reset Timeout: 7500 ms 00:08:00.146 Doorbell Stride: 4 bytes 00:08:00.146 NVM Subsystem Reset: Not Supported 00:08:00.146 Command Sets Supported 00:08:00.146 NVM Command Set: Supported 00:08:00.146 Boot Partition: Not Supported 00:08:00.146 Memory Page Size Minimum: 4096 bytes 00:08:00.146 Memory Page Size Maximum: 65536 bytes 00:08:00.146 Persistent Memory Region: Not Supported 00:08:00.146 Optional Asynchronous Events Supported 00:08:00.146 Namespace Attribute Notices: Supported 00:08:00.146 Firmware Activation Notices: Not Supported 00:08:00.146 ANA Change Notices: Not Supported 00:08:00.146 PLE Aggregate Log Change Notices: Not Supported 00:08:00.146 LBA Status Info Alert Notices: 
Not Supported 00:08:00.146 EGE Aggregate Log Change Notices: Not Supported 00:08:00.146 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.146 Zone Descriptor Change Notices: Not Supported 00:08:00.146 Discovery Log Change Notices: Not Supported 00:08:00.146 Controller Attributes 00:08:00.146 128-bit Host Identifier: Not Supported 00:08:00.146 Non-Operational Permissive Mode: Not Supported 00:08:00.146 NVM Sets: Not Supported 00:08:00.146 Read Recovery Levels: Not Supported 00:08:00.146 Endurance Groups: Not Supported 00:08:00.146 Predictable Latency Mode: Not Supported 00:08:00.146 Traffic Based Keep ALive: Not Supported 00:08:00.146 Namespace Granularity: Not Supported 00:08:00.146 SQ Associations: Not Supported 00:08:00.146 UUID List: Not Supported 00:08:00.146 Multi-Domain Subsystem: Not Supported 00:08:00.146 Fixed Capacity Management: Not Supported 00:08:00.146 Variable Capacity Management: Not Supported 00:08:00.146 Delete Endurance Group: Not Supported 00:08:00.146 Delete NVM Set: Not Supported 00:08:00.146 Extended LBA Formats Supported: Supported 00:08:00.146 Flexible Data Placement Supported: Not Supported 00:08:00.146 00:08:00.146 Controller Memory Buffer Support 00:08:00.146 ================================ 00:08:00.146 Supported: No 00:08:00.146 00:08:00.146 Persistent Memory Region Support 00:08:00.146 ================================ 00:08:00.146 Supported: No 00:08:00.146 00:08:00.146 Admin Command Set Attributes 00:08:00.146 ============================ 00:08:00.146 Security Send/Receive: Not Supported 00:08:00.146 Format NVM: Supported 00:08:00.146 Firmware Activate/Download: Not Supported 00:08:00.146 Namespace Management: Supported 00:08:00.146 Device Self-Test: Not Supported 00:08:00.146 Directives: Supported 00:08:00.146 NVMe-MI: Not Supported 00:08:00.146 Virtualization Management: Not Supported 00:08:00.146 Doorbell Buffer Config: Supported 00:08:00.146 Get LBA Status Capability: Not Supported 00:08:00.146 Command & Feature Lockdown Capability: Not Supported 00:08:00.146 Abort Command Limit: 4 00:08:00.146 Async Event Request Limit: 4 00:08:00.146 Number of Firmware Slots: N/A 00:08:00.146 Firmware Slot 1 Read-Only: N/A 00:08:00.146 Firmware Activation Without Reset: N/A 00:08:00.146 Multiple Update Detection Support: N/A 00:08:00.146 Firmware Update Granularity: No Information Provided 00:08:00.146 Per-Namespace SMART Log: Yes 00:08:00.146 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.146 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:00.146 Command Effects Log Page: Supported 00:08:00.146 Get Log Page Extended Data: Supported 00:08:00.146 Telemetry Log Pages: Not Supported 00:08:00.146 Persistent Event Log Pages: Not Supported 00:08:00.146 Supported Log Pages Log Page: May Support 00:08:00.146 Commands Supported & Effects Log Page: Not Supported 00:08:00.146 Feature Identifiers & Effects Log Page:May Support 00:08:00.146 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.146 Data Area 4 for Telemetry Log: Not Supported 00:08:00.146 Error Log Page Entries Supported: 1 00:08:00.146 Keep Alive: Not Supported 00:08:00.146 00:08:00.146 NVM Command Set Attributes 00:08:00.146 ========================== 00:08:00.146 Submission Queue Entry Size 00:08:00.146 Max: 64 00:08:00.146 Min: 64 00:08:00.146 Completion Queue Entry Size 00:08:00.146 Max: 16 00:08:00.146 Min: 16 00:08:00.146 Number of Namespaces: 256 00:08:00.146 Compare Command: Supported 00:08:00.146 Write Uncorrectable Command: Not Supported 00:08:00.146 Dataset Management Command: 
Supported 00:08:00.146 Write Zeroes Command: Supported 00:08:00.146 Set Features Save Field: Supported 00:08:00.146 Reservations: Not Supported 00:08:00.146 Timestamp: Supported 00:08:00.146 Copy: Supported 00:08:00.146 Volatile Write Cache: Present 00:08:00.146 Atomic Write Unit (Normal): 1 00:08:00.146 Atomic Write Unit (PFail): 1 00:08:00.146 Atomic Compare & Write Unit: 1 00:08:00.146 Fused Compare & Write: Not Supported 00:08:00.146 Scatter-Gather List 00:08:00.146 SGL Command Set: Supported 00:08:00.146 SGL Keyed: Not Supported 00:08:00.146 SGL Bit Bucket Descriptor: Not Supported 00:08:00.146 SGL Metadata Pointer: Not Supported 00:08:00.146 Oversized SGL: Not Supported 00:08:00.146 SGL Metadata Address: Not Supported 00:08:00.146 SGL Offset: Not Supported 00:08:00.146 Transport SGL Data Block: Not Supported 00:08:00.146 Replay Protected Memory Block: Not Supported 00:08:00.146 00:08:00.146 Firmware Slot Information 00:08:00.146 ========================= 00:08:00.146 Active slot: 1 00:08:00.146 Slot 1 Firmware Revision: 1.0 00:08:00.146 00:08:00.146 00:08:00.146 Commands Supported and Effects 00:08:00.146 ============================== 00:08:00.146 Admin Commands 00:08:00.146 -------------- 00:08:00.146 Delete I/O Submission Queue (00h): Supported 00:08:00.146 Create I/O Submission Queue (01h): Supported 00:08:00.146 Get Log Page (02h): Supported 00:08:00.146 Delete I/O Completion Queue (04h): Supported 00:08:00.146 Create I/O Completion Queue (05h): Supported 00:08:00.146 Identify (06h): Supported 00:08:00.146 Abort (08h): Supported 00:08:00.146 Set Features (09h): Supported 00:08:00.146 Get Features (0Ah): Supported 00:08:00.146 Asynchronous Event Request (0Ch): Supported 00:08:00.146 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.146 Directive Send (19h): Supported 00:08:00.146 Directive Receive (1Ah): Supported 00:08:00.146 Virtualization Management (1Ch): Supported 00:08:00.146 Doorbell Buffer Config (7Ch): Supported 00:08:00.146 Format NVM (80h): Supported LBA-Change 00:08:00.146 I/O Commands 00:08:00.146 ------------ 00:08:00.146 Flush (00h): Supported LBA-Change 00:08:00.146 Write (01h): Supported LBA-Change 00:08:00.146 Read (02h): Supported 00:08:00.146 Compare (05h): Supported 00:08:00.146 Write Zeroes (08h): Supported LBA-Change 00:08:00.146 Dataset Management (09h): Supported LBA-Change 00:08:00.146 Unknown (0Ch): Supported 00:08:00.146 Unknown (12h): Supported 00:08:00.146 Copy (19h): Supported LBA-Change 00:08:00.146 Unknown (1Dh): Supported LBA-Change 00:08:00.146 00:08:00.146 Error Log 00:08:00.146 ========= 00:08:00.146 00:08:00.146 Arbitration 00:08:00.146 =========== 00:08:00.146 Arbitration Burst: no limit 00:08:00.146 00:08:00.146 Power Management 00:08:00.146 ================ 00:08:00.146 Number of Power States: 1 00:08:00.146 Current Power State: Power State #0 00:08:00.146 Power State #0: 00:08:00.146 Max Power: 25.00 W 00:08:00.146 Non-Operational State: Operational 00:08:00.147 Entry Latency: 16 microseconds 00:08:00.147 Exit Latency: 4 microseconds 00:08:00.147 Relative Read Throughput: 0 00:08:00.147 Relative Read Latency: 0 00:08:00.147 Relative Write Throughput: 0 00:08:00.147 Relative Write Latency: 0 00:08:00.147 Idle Power: Not Reported 00:08:00.147 Active Power: Not Reported 00:08:00.147 Non-Operational Permissive Mode: Not Supported 00:08:00.147 00:08:00.147 Health Information 00:08:00.147 ================== 00:08:00.147 Critical Warnings: 00:08:00.147 Available Spare Space: OK 00:08:00.147 Temperature: OK 00:08:00.147 Device 
Reliability: OK 00:08:00.147 Read Only: No 00:08:00.147 Volatile Memory Backup: OK 00:08:00.147 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.147 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.147 Available Spare: 0% 00:08:00.147 Available Spare Threshold: 0% 00:08:00.147 Life Percentage Used: 0% 00:08:00.147 Data Units Read: 2253 00:08:00.147 Data Units Written: 2040 00:08:00.147 Host Read Commands: 123142 00:08:00.147 Host Write Commands: 121411 00:08:00.147 Controller Busy Time: 0 minutes 00:08:00.147 Power Cycles: 0 00:08:00.147 Power On Hours: 0 hours 00:08:00.147 Unsafe Shutdowns: 0 00:08:00.147 Unrecoverable Media Errors: 0 00:08:00.147 Lifetime Error Log Entries: 0 00:08:00.147 Warning Temperature Time: 0 minutes 00:08:00.147 Critical Temperature Time: 0 minutes 00:08:00.147 00:08:00.147 Number of Queues 00:08:00.147 ================ 00:08:00.147 Number of I/O Submission Queues: 64 00:08:00.147 Number of I/O Completion Queues: 64 00:08:00.147 00:08:00.147 ZNS Specific Controller Data 00:08:00.147 ============================ 00:08:00.147 Zone Append Size Limit: 0 00:08:00.147 00:08:00.147 00:08:00.147 Active Namespaces 00:08:00.147 ================= 00:08:00.147 Namespace ID:1 00:08:00.147 Error Recovery Timeout: Unlimited 00:08:00.147 Command Set Identifier: NVM (00h) 00:08:00.147 Deallocate: Supported 00:08:00.147 Deallocated/Unwritten Error: Supported 00:08:00.147 Deallocated Read Value: All 0x00 00:08:00.147 Deallocate in Write Zeroes: Not Supported 00:08:00.147 Deallocated Guard Field: 0xFFFF 00:08:00.147 Flush: Supported 00:08:00.147 Reservation: Not Supported 00:08:00.147 Namespace Sharing Capabilities: Private 00:08:00.147 Size (in LBAs): 1048576 (4GiB) 00:08:00.147 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.147 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.147 Thin Provisioning: Not Supported 00:08:00.147 Per-NS Atomic Units: No 00:08:00.147 Maximum Single Source Range Length: 128 00:08:00.147 Maximum Copy Length: 128 00:08:00.147 Maximum Source Range Count: 128 00:08:00.147 NGUID/EUI64 Never Reused: No 00:08:00.147 Namespace Write Protected: No 00:08:00.147 Number of LBA Formats: 8 00:08:00.147 Current LBA Format: LBA Format #04 00:08:00.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.147 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.147 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.147 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.147 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.147 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.147 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.147 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.147 00:08:00.147 NVM Specific Namespace Data 00:08:00.147 =========================== 00:08:00.147 Logical Block Storage Tag Mask: 0 00:08:00.147 Protection Information Capabilities: 00:08:00.147 16b Guard Protection Information Storage Tag Support: No 00:08:00.147 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.147 Storage Tag Check Read Support: No 00:08:00.147 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Namespace ID:2 00:08:00.147 Error Recovery Timeout: Unlimited 00:08:00.147 Command Set Identifier: NVM (00h) 00:08:00.147 Deallocate: Supported 00:08:00.147 Deallocated/Unwritten Error: Supported 00:08:00.147 Deallocated Read Value: All 0x00 00:08:00.147 Deallocate in Write Zeroes: Not Supported 00:08:00.147 Deallocated Guard Field: 0xFFFF 00:08:00.147 Flush: Supported 00:08:00.147 Reservation: Not Supported 00:08:00.147 Namespace Sharing Capabilities: Private 00:08:00.147 Size (in LBAs): 1048576 (4GiB) 00:08:00.147 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.147 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.147 Thin Provisioning: Not Supported 00:08:00.147 Per-NS Atomic Units: No 00:08:00.147 Maximum Single Source Range Length: 128 00:08:00.147 Maximum Copy Length: 128 00:08:00.147 Maximum Source Range Count: 128 00:08:00.147 NGUID/EUI64 Never Reused: No 00:08:00.147 Namespace Write Protected: No 00:08:00.147 Number of LBA Formats: 8 00:08:00.147 Current LBA Format: LBA Format #04 00:08:00.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.147 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.147 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.147 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.147 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.147 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.147 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.147 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.147 00:08:00.147 NVM Specific Namespace Data 00:08:00.147 =========================== 00:08:00.147 Logical Block Storage Tag Mask: 0 00:08:00.147 Protection Information Capabilities: 00:08:00.147 16b Guard Protection Information Storage Tag Support: No 00:08:00.147 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.147 Storage Tag Check Read Support: No 00:08:00.147 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Namespace ID:3 00:08:00.147 Error Recovery Timeout: Unlimited 00:08:00.147 Command Set Identifier: NVM (00h) 00:08:00.147 Deallocate: Supported 00:08:00.147 Deallocated/Unwritten Error: Supported 00:08:00.147 Deallocated Read Value: All 0x00 00:08:00.147 Deallocate in Write Zeroes: Not Supported 00:08:00.147 Deallocated Guard Field: 0xFFFF 00:08:00.147 Flush: Supported 00:08:00.147 Reservation: Not Supported 00:08:00.147 
Namespace Sharing Capabilities: Private 00:08:00.147 Size (in LBAs): 1048576 (4GiB) 00:08:00.147 Capacity (in LBAs): 1048576 (4GiB) 00:08:00.147 Utilization (in LBAs): 1048576 (4GiB) 00:08:00.147 Thin Provisioning: Not Supported 00:08:00.147 Per-NS Atomic Units: No 00:08:00.147 Maximum Single Source Range Length: 128 00:08:00.147 Maximum Copy Length: 128 00:08:00.147 Maximum Source Range Count: 128 00:08:00.147 NGUID/EUI64 Never Reused: No 00:08:00.147 Namespace Write Protected: No 00:08:00.147 Number of LBA Formats: 8 00:08:00.147 Current LBA Format: LBA Format #04 00:08:00.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.147 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.147 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.147 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.147 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.147 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.147 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:00.147 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.147 00:08:00.147 NVM Specific Namespace Data 00:08:00.147 =========================== 00:08:00.147 Logical Block Storage Tag Mask: 0 00:08:00.147 Protection Information Capabilities: 00:08:00.147 16b Guard Protection Information Storage Tag Support: No 00:08:00.147 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.147 Storage Tag Check Read Support: No 00:08:00.147 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.147 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.148 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.148 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.148 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.148 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.148 21:39:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:00.148 21:39:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:00.408 ===================================================== 00:08:00.408 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.408 ===================================================== 00:08:00.408 Controller Capabilities/Features 00:08:00.408 ================================ 00:08:00.408 Vendor ID: 1b36 00:08:00.408 Subsystem Vendor ID: 1af4 00:08:00.408 Serial Number: 12343 00:08:00.408 Model Number: QEMU NVMe Ctrl 00:08:00.408 Firmware Version: 8.0.0 00:08:00.408 Recommended Arb Burst: 6 00:08:00.408 IEEE OUI Identifier: 00 54 52 00:08:00.408 Multi-path I/O 00:08:00.408 May have multiple subsystem ports: No 00:08:00.408 May have multiple controllers: Yes 00:08:00.408 Associated with SR-IOV VF: No 00:08:00.408 Max Data Transfer Size: 524288 00:08:00.408 Max Number of Namespaces: 256 00:08:00.408 Max Number of I/O Queues: 64 00:08:00.408 NVMe Specification Version (VS): 1.4 00:08:00.408 NVMe Specification Version (Identify): 1.4 00:08:00.408 Maximum Queue Entries: 2048 
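Editor's note: unlike the other controllers in this run, 12343 reports "May have multiple controllers: Yes" above, and its dump continues below with Endurance Groups and Flexible Data Placement shown as Supported (it backs the fdp-subsys3 subsystem). One way to surface such capability differences between two long dumps, assuming each controller's output was saved to its own file (hypothetical filenames):

    # Lines unique to either side are the capability differences,
    # e.g. Endurance Groups and Flexible Data Placement here.
    diff 12342.txt 12343.txt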
00:08:00.408 Contiguous Queues Required: Yes 00:08:00.408 Arbitration Mechanisms Supported 00:08:00.408 Weighted Round Robin: Not Supported 00:08:00.408 Vendor Specific: Not Supported 00:08:00.408 Reset Timeout: 7500 ms 00:08:00.408 Doorbell Stride: 4 bytes 00:08:00.408 NVM Subsystem Reset: Not Supported 00:08:00.408 Command Sets Supported 00:08:00.408 NVM Command Set: Supported 00:08:00.408 Boot Partition: Not Supported 00:08:00.408 Memory Page Size Minimum: 4096 bytes 00:08:00.408 Memory Page Size Maximum: 65536 bytes 00:08:00.408 Persistent Memory Region: Not Supported 00:08:00.408 Optional Asynchronous Events Supported 00:08:00.408 Namespace Attribute Notices: Supported 00:08:00.408 Firmware Activation Notices: Not Supported 00:08:00.408 ANA Change Notices: Not Supported 00:08:00.408 PLE Aggregate Log Change Notices: Not Supported 00:08:00.408 LBA Status Info Alert Notices: Not Supported 00:08:00.408 EGE Aggregate Log Change Notices: Not Supported 00:08:00.408 Normal NVM Subsystem Shutdown event: Not Supported 00:08:00.408 Zone Descriptor Change Notices: Not Supported 00:08:00.408 Discovery Log Change Notices: Not Supported 00:08:00.408 Controller Attributes 00:08:00.408 128-bit Host Identifier: Not Supported 00:08:00.408 Non-Operational Permissive Mode: Not Supported 00:08:00.408 NVM Sets: Not Supported 00:08:00.408 Read Recovery Levels: Not Supported 00:08:00.408 Endurance Groups: Supported 00:08:00.408 Predictable Latency Mode: Not Supported 00:08:00.408 Traffic Based Keep ALive: Not Supported 00:08:00.408 Namespace Granularity: Not Supported 00:08:00.408 SQ Associations: Not Supported 00:08:00.408 UUID List: Not Supported 00:08:00.408 Multi-Domain Subsystem: Not Supported 00:08:00.408 Fixed Capacity Management: Not Supported 00:08:00.408 Variable Capacity Management: Not Supported 00:08:00.408 Delete Endurance Group: Not Supported 00:08:00.408 Delete NVM Set: Not Supported 00:08:00.408 Extended LBA Formats Supported: Supported 00:08:00.408 Flexible Data Placement Supported: Supported 00:08:00.408 00:08:00.408 Controller Memory Buffer Support 00:08:00.408 ================================ 00:08:00.408 Supported: No 00:08:00.408 00:08:00.408 Persistent Memory Region Support 00:08:00.408 ================================ 00:08:00.408 Supported: No 00:08:00.408 00:08:00.408 Admin Command Set Attributes 00:08:00.408 ============================ 00:08:00.408 Security Send/Receive: Not Supported 00:08:00.408 Format NVM: Supported 00:08:00.408 Firmware Activate/Download: Not Supported 00:08:00.408 Namespace Management: Supported 00:08:00.408 Device Self-Test: Not Supported 00:08:00.408 Directives: Supported 00:08:00.408 NVMe-MI: Not Supported 00:08:00.408 Virtualization Management: Not Supported 00:08:00.408 Doorbell Buffer Config: Supported 00:08:00.408 Get LBA Status Capability: Not Supported 00:08:00.408 Command & Feature Lockdown Capability: Not Supported 00:08:00.408 Abort Command Limit: 4 00:08:00.408 Async Event Request Limit: 4 00:08:00.408 Number of Firmware Slots: N/A 00:08:00.408 Firmware Slot 1 Read-Only: N/A 00:08:00.408 Firmware Activation Without Reset: N/A 00:08:00.408 Multiple Update Detection Support: N/A 00:08:00.408 Firmware Update Granularity: No Information Provided 00:08:00.408 Per-Namespace SMART Log: Yes 00:08:00.409 Asymmetric Namespace Access Log Page: Not Supported 00:08:00.409 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:00.409 Command Effects Log Page: Supported 00:08:00.409 Get Log Page Extended Data: Supported 00:08:00.409 Telemetry Log Pages: Not 
Supported 00:08:00.409 Persistent Event Log Pages: Not Supported 00:08:00.409 Supported Log Pages Log Page: May Support 00:08:00.409 Commands Supported & Effects Log Page: Not Supported 00:08:00.409 Feature Identifiers & Effects Log Page: May Support 00:08:00.409 NVMe-MI Commands & Effects Log Page: May Support 00:08:00.409 Data Area 4 for Telemetry Log: Not Supported 00:08:00.409 Error Log Page Entries Supported: 1 00:08:00.409 Keep Alive: Not Supported 00:08:00.409 00:08:00.409 NVM Command Set Attributes 00:08:00.409 ========================== 00:08:00.409 Submission Queue Entry Size 00:08:00.409 Max: 64 00:08:00.409 Min: 64 00:08:00.409 Completion Queue Entry Size 00:08:00.409 Max: 16 00:08:00.409 Min: 16 00:08:00.409 Number of Namespaces: 256 00:08:00.409 Compare Command: Supported 00:08:00.409 Write Uncorrectable Command: Not Supported 00:08:00.409 Dataset Management Command: Supported 00:08:00.409 Write Zeroes Command: Supported 00:08:00.409 Set Features Save Field: Supported 00:08:00.409 Reservations: Not Supported 00:08:00.409 Timestamp: Supported 00:08:00.409 Copy: Supported 00:08:00.409 Volatile Write Cache: Present 00:08:00.409 Atomic Write Unit (Normal): 1 00:08:00.409 Atomic Write Unit (PFail): 1 00:08:00.409 Atomic Compare & Write Unit: 1 00:08:00.409 Fused Compare & Write: Not Supported 00:08:00.409 Scatter-Gather List 00:08:00.409 SGL Command Set: Supported 00:08:00.409 SGL Keyed: Not Supported 00:08:00.409 SGL Bit Bucket Descriptor: Not Supported 00:08:00.409 SGL Metadata Pointer: Not Supported 00:08:00.409 Oversized SGL: Not Supported 00:08:00.409 SGL Metadata Address: Not Supported 00:08:00.409 SGL Offset: Not Supported 00:08:00.409 Transport SGL Data Block: Not Supported 00:08:00.409 Replay Protected Memory Block: Not Supported 00:08:00.409 00:08:00.409 Firmware Slot Information 00:08:00.409 ========================= 00:08:00.409 Active slot: 1 00:08:00.409 Slot 1 Firmware Revision: 1.0 00:08:00.409 00:08:00.409 00:08:00.409 Commands Supported and Effects 00:08:00.409 ============================== 00:08:00.409 Admin Commands 00:08:00.409 -------------- 00:08:00.409 Delete I/O Submission Queue (00h): Supported 00:08:00.409 Create I/O Submission Queue (01h): Supported 00:08:00.409 Get Log Page (02h): Supported 00:08:00.409 Delete I/O Completion Queue (04h): Supported 00:08:00.409 Create I/O Completion Queue (05h): Supported 00:08:00.409 Identify (06h): Supported 00:08:00.409 Abort (08h): Supported 00:08:00.409 Set Features (09h): Supported 00:08:00.409 Get Features (0Ah): Supported 00:08:00.409 Asynchronous Event Request (0Ch): Supported 00:08:00.409 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:00.409 Directive Send (19h): Supported 00:08:00.409 Directive Receive (1Ah): Supported 00:08:00.409 Virtualization Management (1Ch): Supported 00:08:00.409 Doorbell Buffer Config (7Ch): Supported 00:08:00.409 Format NVM (80h): Supported LBA-Change 00:08:00.409 I/O Commands 00:08:00.409 ------------ 00:08:00.409 Flush (00h): Supported LBA-Change 00:08:00.409 Write (01h): Supported LBA-Change 00:08:00.409 Read (02h): Supported 00:08:00.409 Compare (05h): Supported 00:08:00.409 Write Zeroes (08h): Supported LBA-Change 00:08:00.409 Dataset Management (09h): Supported LBA-Change 00:08:00.409 Unknown (0Ch): Supported 00:08:00.409 Unknown (12h): Supported 00:08:00.409 Copy (19h): Supported LBA-Change 00:08:00.409 Unknown (1Dh): Supported LBA-Change 00:08:00.409 00:08:00.409 Error Log 00:08:00.409 ========= 00:08:00.409 00:08:00.409 Arbitration 00:08:00.409 =========== 
00:08:00.409 Arbitration Burst: no limit 00:08:00.409 00:08:00.409 Power Management 00:08:00.409 ================ 00:08:00.409 Number of Power States: 1 00:08:00.409 Current Power State: Power State #0 00:08:00.409 Power State #0: 00:08:00.409 Max Power: 25.00 W 00:08:00.409 Non-Operational State: Operational 00:08:00.409 Entry Latency: 16 microseconds 00:08:00.409 Exit Latency: 4 microseconds 00:08:00.409 Relative Read Throughput: 0 00:08:00.409 Relative Read Latency: 0 00:08:00.409 Relative Write Throughput: 0 00:08:00.409 Relative Write Latency: 0 00:08:00.409 Idle Power: Not Reported 00:08:00.409 Active Power: Not Reported 00:08:00.409 Non-Operational Permissive Mode: Not Supported 00:08:00.409 00:08:00.409 Health Information 00:08:00.409 ================== 00:08:00.409 Critical Warnings: 00:08:00.409 Available Spare Space: OK 00:08:00.409 Temperature: OK 00:08:00.409 Device Reliability: OK 00:08:00.409 Read Only: No 00:08:00.409 Volatile Memory Backup: OK 00:08:00.409 Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.409 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:00.409 Available Spare: 0% 00:08:00.409 Available Spare Threshold: 0% 00:08:00.409 Life Percentage Used: 0% 00:08:00.409 Data Units Read: 840 00:08:00.409 Data Units Written: 769 00:08:00.409 Host Read Commands: 41983 00:08:00.409 Host Write Commands: 41406 00:08:00.409 Controller Busy Time: 0 minutes 00:08:00.409 Power Cycles: 0 00:08:00.409 Power On Hours: 0 hours 00:08:00.409 Unsafe Shutdowns: 0 00:08:00.409 Unrecoverable Media Errors: 0 00:08:00.409 Lifetime Error Log Entries: 0 00:08:00.409 Warning Temperature Time: 0 minutes 00:08:00.410 Critical Temperature Time: 0 minutes 00:08:00.410 00:08:00.410 Number of Queues 00:08:00.410 ================ 00:08:00.410 Number of I/O Submission Queues: 64 00:08:00.410 Number of I/O Completion Queues: 64 00:08:00.410 00:08:00.410 ZNS Specific Controller Data 00:08:00.410 ============================ 00:08:00.410 Zone Append Size Limit: 0 00:08:00.410 00:08:00.410 00:08:00.410 Active Namespaces 00:08:00.410 ================= 00:08:00.410 Namespace ID:1 00:08:00.410 Error Recovery Timeout: Unlimited 00:08:00.410 Command Set Identifier: NVM (00h) 00:08:00.410 Deallocate: Supported 00:08:00.410 Deallocated/Unwritten Error: Supported 00:08:00.410 Deallocated Read Value: All 0x00 00:08:00.410 Deallocate in Write Zeroes: Not Supported 00:08:00.410 Deallocated Guard Field: 0xFFFF 00:08:00.410 Flush: Supported 00:08:00.410 Reservation: Not Supported 00:08:00.410 Namespace Sharing Capabilities: Multiple Controllers 00:08:00.410 Size (in LBAs): 262144 (1GiB) 00:08:00.410 Capacity (in LBAs): 262144 (1GiB) 00:08:00.410 Utilization (in LBAs): 262144 (1GiB) 00:08:00.410 Thin Provisioning: Not Supported 00:08:00.410 Per-NS Atomic Units: No 00:08:00.410 Maximum Single Source Range Length: 128 00:08:00.410 Maximum Copy Length: 128 00:08:00.410 Maximum Source Range Count: 128 00:08:00.410 NGUID/EUI64 Never Reused: No 00:08:00.410 Namespace Write Protected: No 00:08:00.410 Endurance group ID: 1 00:08:00.410 Number of LBA Formats: 8 00:08:00.410 Current LBA Format: LBA Format #04 00:08:00.410 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:00.410 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:00.410 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:00.410 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:00.410 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:00.410 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:00.410 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:00.410 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:00.410 00:08:00.410 Get Feature FDP: 00:08:00.410 ================ 00:08:00.410 Enabled: Yes 00:08:00.410 FDP configuration index: 0 00:08:00.410 00:08:00.410 FDP configurations log page 00:08:00.410 =========================== 00:08:00.410 Number of FDP configurations: 1 00:08:00.410 Version: 0 00:08:00.410 Size: 112 00:08:00.410 FDP Configuration Descriptor: 0 00:08:00.410 Descriptor Size: 96 00:08:00.410 Reclaim Group Identifier format: 2 00:08:00.410 FDP Volatile Write Cache: Not Present 00:08:00.410 FDP Configuration: Valid 00:08:00.410 Vendor Specific Size: 0 00:08:00.410 Number of Reclaim Groups: 2 00:08:00.410 Number of Reclaim Unit Handles: 8 00:08:00.410 Max Placement Identifiers: 128 00:08:00.410 Number of Namespaces Supported: 256 00:08:00.410 Reclaim unit Nominal Size: 6000000 bytes 00:08:00.410 Estimated Reclaim Unit Time Limit: Not Reported 00:08:00.410 RUH Desc #000: RUH Type: Initially Isolated 00:08:00.410 RUH Desc #001: RUH Type: Initially Isolated 00:08:00.410 RUH Desc #002: RUH Type: Initially Isolated 00:08:00.410 RUH Desc #003: RUH Type: Initially Isolated 00:08:00.410 RUH Desc #004: RUH Type: Initially Isolated 00:08:00.410 RUH Desc #005: RUH Type: Initially Isolated 00:08:00.410 RUH Desc #006: RUH Type: Initially Isolated 00:08:00.410 RUH Desc #007: RUH Type: Initially Isolated 00:08:00.410 00:08:00.410 FDP reclaim unit handle usage log page 00:08:00.410 ====================================== 00:08:00.410 Number of Reclaim Unit Handles: 8 00:08:00.410 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:00.410 RUH Usage Desc #001: RUH Attributes: Unused 00:08:00.410 RUH Usage Desc #002: RUH Attributes: Unused 00:08:00.410 RUH Usage Desc #003: RUH Attributes: Unused 00:08:00.410 RUH Usage Desc #004: RUH Attributes: Unused 00:08:00.410 RUH Usage Desc #005: RUH Attributes: Unused 00:08:00.410 RUH Usage Desc #006: RUH Attributes: Unused 00:08:00.410 RUH Usage Desc #007: RUH Attributes: Unused 00:08:00.410 00:08:00.410 FDP statistics log page 00:08:00.410 ======================= 00:08:00.410 Host bytes with metadata written: 488349696 00:08:00.410 Media bytes with metadata written: 488394752 00:08:00.410 Media bytes erased: 0 00:08:00.410 00:08:00.410 FDP events log page 00:08:00.410 =================== 00:08:00.410 Number of FDP events: 0 00:08:00.410 00:08:00.410 NVM Specific Namespace Data 00:08:00.410 =========================== 00:08:00.410 Logical Block Storage Tag Mask: 0 00:08:00.410 Protection Information Capabilities: 00:08:00.410 16b Guard Protection Information Storage Tag Support: No 00:08:00.410 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:00.410 Storage Tag Check Read Support: No 00:08:00.410 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.410 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.410 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.410 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.410 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.410 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.410 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.410 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:00.410 00:08:00.410 real 0m1.159s 00:08:00.410 user 0m0.386s 00:08:00.410 sys 0m0.548s 00:08:00.410 21:39:19 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.410 21:39:19 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:00.410 ************************************ 00:08:00.410 END TEST nvme_identify 00:08:00.410 ************************************ 00:08:00.410 21:39:19 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:00.410 21:39:19 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.411 21:39:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.411 21:39:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.411 ************************************ 00:08:00.411 START TEST nvme_perf 00:08:00.411 ************************************ 00:08:00.411 21:39:19 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:00.411 21:39:19 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:01.795 Initializing NVMe Controllers 00:08:01.795 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:01.795 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:01.795 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:01.795 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:01.795 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:01.795 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:01.795 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:01.795 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:01.795 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:01.795 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:01.795 Initialization complete. Launching workers. 
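The nvme_identify test that just finished (nvme.sh@15-16 in the trace above) simply loops over the PCI addresses under test and runs the identify example binary against each one. A minimal standalone sketch of that loop, assuming the bdfs array holds the four addresses seen in this log:

  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      # Same invocation as the harness, one controller per iteration.
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
  done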
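The namespace geometry in those identify dumps is self-consistent: with the current LBA format (#04, 4096-byte data blocks, no metadata), the reported LBA counts multiply out to the stated capacities. A quick check in shell arithmetic, using values taken from the output above:

  echo $(( 1048576 * 4096 ))   # 4294967296 bytes = 4 GiB (private namespace)
  echo $(( 262144 * 4096 ))    # 1073741824 bytes = 1 GiB (shared FDP namespace, NSID 1)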
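The spdk_nvme_perf invocation above issues reads (-w read) at a 12288-byte (12 KiB) I/O size (-o 12288) for one second (-t 1), keeping 128 I/Os outstanding per namespace (-q 128). Two quick sanity checks against the summary table that follows, using the 0000:00:10.0 row: throughput should equal IOPS times I/O size, and by Little's law the mean latency should sit near queue depth divided by IOPS. A sketch in awk, assuming only the figures reported below:

  awk 'BEGIN { printf "%.2f MiB/s\n", 14901.84 * 12288 / 1048576 }'   # 174.63, matching the MiB/s column
  awk 'BEGIN { printf "%.1f us\n", 128 / 14901.84 * 1e6 }'            # ~8589.5 us vs. the 8600.65 us average reported

The small gap between the two latency figures is expected: the reported average includes submission and completion overhead on the host side, not just device service time.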
00:08:01.795 ======================================================== 00:08:01.795 Latency(us) 00:08:01.795 Device Information : IOPS MiB/s Average min max 00:08:01.795 PCIE (0000:00:10.0) NSID 1 from core 0: 14901.84 174.63 8600.65 5897.48 38225.10 00:08:01.795 PCIE (0000:00:11.0) NSID 1 from core 0: 14901.84 174.63 8589.09 5965.97 36468.01 00:08:01.795 PCIE (0000:00:13.0) NSID 1 from core 0: 14901.84 174.63 8576.25 6008.33 35237.00 00:08:01.795 PCIE (0000:00:12.0) NSID 1 from core 0: 14901.84 174.63 8563.10 5964.87 33499.33 00:08:01.795 PCIE (0000:00:12.0) NSID 2 from core 0: 14901.84 174.63 8549.76 6004.12 31744.61 00:08:01.795 PCIE (0000:00:12.0) NSID 3 from core 0: 14965.79 175.38 8499.97 5978.21 26480.14 00:08:01.795 ======================================================== 00:08:01.795 Total : 89474.98 1048.53 8563.09 5897.48 38225.10 00:08:01.795 00:08:01.795 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:01.795 ================================================================================= 00:08:01.795 1.00000% : 6074.683us 00:08:01.795 10.00000% : 6553.600us 00:08:01.795 25.00000% : 7410.609us 00:08:01.795 50.00000% : 8418.855us 00:08:01.795 75.00000% : 8922.978us 00:08:01.795 90.00000% : 10284.111us 00:08:01.795 95.00000% : 11544.418us 00:08:01.795 98.00000% : 13712.148us 00:08:01.795 99.00000% : 15426.166us 00:08:01.795 99.50000% : 32263.877us 00:08:01.795 99.90000% : 37910.055us 00:08:01.795 99.99000% : 38313.354us 00:08:01.795 99.99900% : 38313.354us 00:08:01.795 99.99990% : 38313.354us 00:08:01.795 99.99999% : 38313.354us 00:08:01.795 00:08:01.796 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:01.796 ================================================================================= 00:08:01.796 1.00000% : 6150.302us 00:08:01.796 10.00000% : 6553.600us 00:08:01.796 25.00000% : 7410.609us 00:08:01.796 50.00000% : 8418.855us 00:08:01.796 75.00000% : 8922.978us 00:08:01.796 90.00000% : 10284.111us 00:08:01.796 95.00000% : 11645.243us 00:08:01.796 98.00000% : 13712.148us 00:08:01.796 99.00000% : 16031.114us 00:08:01.796 99.50000% : 30852.332us 00:08:01.796 99.90000% : 36296.862us 00:08:01.796 99.99000% : 36498.511us 00:08:01.796 99.99900% : 36498.511us 00:08:01.796 99.99990% : 36498.511us 00:08:01.796 99.99999% : 36498.511us 00:08:01.796 00:08:01.796 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:01.796 ================================================================================= 00:08:01.796 1.00000% : 6150.302us 00:08:01.796 10.00000% : 6553.600us 00:08:01.796 25.00000% : 7410.609us 00:08:01.796 50.00000% : 8368.443us 00:08:01.796 75.00000% : 8922.978us 00:08:01.796 90.00000% : 10183.286us 00:08:01.796 95.00000% : 11746.068us 00:08:01.796 98.00000% : 13006.375us 00:08:01.796 99.00000% : 15930.289us 00:08:01.796 99.50000% : 30247.385us 00:08:01.796 99.90000% : 35086.966us 00:08:01.796 99.99000% : 35288.615us 00:08:01.796 99.99900% : 35288.615us 00:08:01.796 99.99990% : 35288.615us 00:08:01.796 99.99999% : 35288.615us 00:08:01.796 00:08:01.796 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:01.796 ================================================================================= 00:08:01.796 1.00000% : 6150.302us 00:08:01.796 10.00000% : 6553.600us 00:08:01.796 25.00000% : 7461.022us 00:08:01.796 50.00000% : 8368.443us 00:08:01.796 75.00000% : 8922.978us 00:08:01.796 90.00000% : 10233.698us 00:08:01.796 95.00000% : 11947.717us 00:08:01.796 98.00000% : 13308.849us 00:08:01.796 
99.00000% : 15426.166us 00:08:01.796 99.50000% : 28634.191us 00:08:01.796 99.90000% : 33272.123us 00:08:01.796 99.99000% : 33675.422us 00:08:01.796 99.99900% : 33675.422us 00:08:01.796 99.99990% : 33675.422us 00:08:01.796 99.99999% : 33675.422us 00:08:01.796 00:08:01.796 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:01.796 ================================================================================= 00:08:01.796 1.00000% : 6150.302us 00:08:01.796 10.00000% : 6553.600us 00:08:01.796 25.00000% : 7461.022us 00:08:01.796 50.00000% : 8418.855us 00:08:01.796 75.00000% : 8922.978us 00:08:01.796 90.00000% : 10233.698us 00:08:01.796 95.00000% : 11796.480us 00:08:01.796 98.00000% : 13510.498us 00:08:01.796 99.00000% : 14619.569us 00:08:01.796 99.50000% : 27020.997us 00:08:01.796 99.90000% : 31457.280us 00:08:01.796 99.99000% : 31860.578us 00:08:01.796 99.99900% : 31860.578us 00:08:01.796 99.99990% : 31860.578us 00:08:01.796 99.99999% : 31860.578us 00:08:01.796 00:08:01.796 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:01.796 ================================================================================= 00:08:01.796 1.00000% : 6150.302us 00:08:01.796 10.00000% : 6553.600us 00:08:01.796 25.00000% : 7461.022us 00:08:01.796 50.00000% : 8418.855us 00:08:01.796 75.00000% : 8922.978us 00:08:01.796 90.00000% : 10284.111us 00:08:01.796 95.00000% : 11544.418us 00:08:01.796 98.00000% : 13611.323us 00:08:01.796 99.00000% : 14417.920us 00:08:01.796 99.50000% : 19862.449us 00:08:01.796 99.90000% : 26214.400us 00:08:01.796 99.99000% : 26617.698us 00:08:01.796 99.99900% : 26617.698us 00:08:01.796 99.99990% : 26617.698us 00:08:01.796 99.99999% : 26617.698us 00:08:01.796 00:08:01.796 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:01.796 ============================================================================== 00:08:01.796 Range in us Cumulative IO count 00:08:01.796 5873.034 - 5898.240: 0.0067% ( 1) 00:08:01.796 5898.240 - 5923.446: 0.0201% ( 2) 00:08:01.796 5923.446 - 5948.652: 0.0805% ( 9) 00:08:01.796 5948.652 - 5973.858: 0.1609% ( 12) 00:08:01.796 5973.858 - 5999.065: 0.3420% ( 27) 00:08:01.796 5999.065 - 6024.271: 0.6237% ( 42) 00:08:01.796 6024.271 - 6049.477: 0.8114% ( 28) 00:08:01.796 6049.477 - 6074.683: 1.1333% ( 48) 00:08:01.796 6074.683 - 6099.889: 1.5089% ( 56) 00:08:01.796 6099.889 - 6125.095: 1.9112% ( 60) 00:08:01.796 6125.095 - 6150.302: 2.3672% ( 68) 00:08:01.796 6150.302 - 6175.508: 2.7763% ( 61) 00:08:01.796 6175.508 - 6200.714: 3.2189% ( 66) 00:08:01.796 6200.714 - 6225.920: 3.6548% ( 65) 00:08:01.796 6225.920 - 6251.126: 4.0974% ( 66) 00:08:01.796 6251.126 - 6276.332: 4.6540% ( 83) 00:08:01.796 6276.332 - 6301.538: 5.1837% ( 79) 00:08:01.796 6301.538 - 6326.745: 5.6129% ( 64) 00:08:01.796 6326.745 - 6351.951: 6.0756% ( 69) 00:08:01.796 6351.951 - 6377.157: 6.5853% ( 76) 00:08:01.796 6377.157 - 6402.363: 7.1620% ( 86) 00:08:01.796 6402.363 - 6427.569: 7.6985% ( 80) 00:08:01.796 6427.569 - 6452.775: 8.2283% ( 79) 00:08:01.796 6452.775 - 6503.188: 9.3951% ( 174) 00:08:01.796 6503.188 - 6553.600: 10.4547% ( 158) 00:08:01.796 6553.600 - 6604.012: 11.5880% ( 169) 00:08:01.796 6604.012 - 6654.425: 12.7146% ( 168) 00:08:01.796 6654.425 - 6704.837: 13.7943% ( 161) 00:08:01.796 6704.837 - 6755.249: 14.9477% ( 172) 00:08:01.796 6755.249 - 6805.662: 16.0609% ( 166) 00:08:01.796 6805.662 - 6856.074: 17.2546% ( 178) 00:08:01.796 6856.074 - 6906.486: 18.4616% ( 180) 00:08:01.796 6906.486 - 6956.898: 19.6218% ( 173) 00:08:01.796 
6956.898 - 7007.311: 20.7014% ( 161) 00:08:01.796 7007.311 - 7057.723: 21.6336% ( 139) 00:08:01.796 7057.723 - 7108.135: 22.3780% ( 111) 00:08:01.796 7108.135 - 7158.548: 22.9681% ( 88) 00:08:01.796 7158.548 - 7208.960: 23.4777% ( 76) 00:08:01.796 7208.960 - 7259.372: 23.8935% ( 62) 00:08:01.796 7259.372 - 7309.785: 24.3361% ( 66) 00:08:01.796 7309.785 - 7360.197: 24.7385% ( 60) 00:08:01.796 7360.197 - 7410.609: 25.0738% ( 50) 00:08:01.796 7410.609 - 7461.022: 25.3822% ( 46) 00:08:01.796 7461.022 - 7511.434: 25.8315% ( 67) 00:08:01.796 7511.434 - 7561.846: 26.2473% ( 62) 00:08:01.796 7561.846 - 7612.258: 26.8039% ( 83) 00:08:01.796 7612.258 - 7662.671: 27.3739% ( 85) 00:08:01.796 7662.671 - 7713.083: 27.9842% ( 91) 00:08:01.796 7713.083 - 7763.495: 28.7956% ( 121) 00:08:01.796 7763.495 - 7813.908: 29.6942% ( 134) 00:08:01.796 7813.908 - 7864.320: 30.7068% ( 151) 00:08:01.796 7864.320 - 7914.732: 32.1017% ( 208) 00:08:01.796 7914.732 - 7965.145: 33.6440% ( 230) 00:08:01.796 7965.145 - 8015.557: 35.3004% ( 247) 00:08:01.796 8015.557 - 8065.969: 36.9233% ( 242) 00:08:01.796 8065.969 - 8116.382: 38.6467% ( 257) 00:08:01.796 8116.382 - 8166.794: 40.6585% ( 300) 00:08:01.796 8166.794 - 8217.206: 42.6972% ( 304) 00:08:01.796 8217.206 - 8267.618: 44.8431% ( 320) 00:08:01.796 8267.618 - 8318.031: 47.3914% ( 380) 00:08:01.796 8318.031 - 8368.443: 49.6714% ( 340) 00:08:01.796 8368.443 - 8418.855: 52.1727% ( 373) 00:08:01.796 8418.855 - 8469.268: 54.8216% ( 395) 00:08:01.796 8469.268 - 8519.680: 57.3632% ( 379) 00:08:01.796 8519.680 - 8570.092: 59.7841% ( 361) 00:08:01.796 8570.092 - 8620.505: 62.1982% ( 360) 00:08:01.796 8620.505 - 8670.917: 64.7465% ( 380) 00:08:01.796 8670.917 - 8721.329: 67.1607% ( 360) 00:08:01.796 8721.329 - 8771.742: 69.4407% ( 340) 00:08:01.796 8771.742 - 8822.154: 71.4525% ( 300) 00:08:01.796 8822.154 - 8872.566: 73.5046% ( 306) 00:08:01.796 8872.566 - 8922.978: 75.1677% ( 248) 00:08:01.796 8922.978 - 8973.391: 76.8442% ( 250) 00:08:01.796 8973.391 - 9023.803: 78.1183% ( 190) 00:08:01.796 9023.803 - 9074.215: 79.5467% ( 213) 00:08:01.796 9074.215 - 9124.628: 80.7605% ( 181) 00:08:01.796 9124.628 - 9175.040: 81.7865% ( 153) 00:08:01.796 9175.040 - 9225.452: 82.6650% ( 131) 00:08:01.796 9225.452 - 9275.865: 83.4362% ( 115) 00:08:01.796 9275.865 - 9326.277: 84.0799% ( 96) 00:08:01.796 9326.277 - 9376.689: 84.5561% ( 71) 00:08:01.796 9376.689 - 9427.102: 85.0456% ( 73) 00:08:01.796 9427.102 - 9477.514: 85.5754% ( 79) 00:08:01.796 9477.514 - 9527.926: 85.8906% ( 47) 00:08:01.796 9527.926 - 9578.338: 86.2795% ( 58) 00:08:01.796 9578.338 - 9628.751: 86.6483% ( 55) 00:08:01.796 9628.751 - 9679.163: 86.8898% ( 36) 00:08:01.796 9679.163 - 9729.575: 87.1647% ( 41) 00:08:01.796 9729.575 - 9779.988: 87.4732% ( 46) 00:08:01.796 9779.988 - 9830.400: 87.7951% ( 48) 00:08:01.796 9830.400 - 9880.812: 88.1237% ( 49) 00:08:01.796 9880.812 - 9931.225: 88.4254% ( 45) 00:08:01.796 9931.225 - 9981.637: 88.7272% ( 45) 00:08:01.796 9981.637 - 10032.049: 88.9418% ( 32) 00:08:01.796 10032.049 - 10082.462: 89.2167% ( 41) 00:08:01.796 10082.462 - 10132.874: 89.4649% ( 37) 00:08:01.796 10132.874 - 10183.286: 89.7063% ( 36) 00:08:01.796 10183.286 - 10233.698: 89.9410% ( 35) 00:08:01.796 10233.698 - 10284.111: 90.2025% ( 39) 00:08:01.796 10284.111 - 10334.523: 90.4305% ( 34) 00:08:01.796 10334.523 - 10384.935: 90.6585% ( 34) 00:08:01.796 10384.935 - 10435.348: 90.8865% ( 34) 00:08:01.796 10435.348 - 10485.760: 91.1011% ( 32) 00:08:01.796 10485.760 - 10536.172: 91.3694% ( 40) 00:08:01.796 10536.172 - 
10586.585: 91.6175% ( 37) 00:08:01.796 10586.585 - 10636.997: 91.8321% ( 32) 00:08:01.796 10636.997 - 10687.409: 92.0869% ( 38) 00:08:01.796 10687.409 - 10737.822: 92.2881% ( 30) 00:08:01.796 10737.822 - 10788.234: 92.5228% ( 35) 00:08:01.796 10788.234 - 10838.646: 92.7173% ( 29) 00:08:01.796 10838.646 - 10889.058: 92.8916% ( 26) 00:08:01.796 10889.058 - 10939.471: 93.0794% ( 28) 00:08:01.796 10939.471 - 10989.883: 93.2672% ( 28) 00:08:01.796 10989.883 - 11040.295: 93.4482% ( 27) 00:08:01.796 11040.295 - 11090.708: 93.6427% ( 29) 00:08:01.796 11090.708 - 11141.120: 93.8104% ( 25) 00:08:01.796 11141.120 - 11191.532: 93.9646% ( 23) 00:08:01.796 11191.532 - 11241.945: 94.1121% ( 22) 00:08:01.797 11241.945 - 11292.357: 94.2597% ( 22) 00:08:01.797 11292.357 - 11342.769: 94.4675% ( 31) 00:08:01.797 11342.769 - 11393.182: 94.6486% ( 27) 00:08:01.797 11393.182 - 11443.594: 94.8297% ( 27) 00:08:01.797 11443.594 - 11494.006: 94.9638% ( 20) 00:08:01.797 11494.006 - 11544.418: 95.1046% ( 21) 00:08:01.797 11544.418 - 11594.831: 95.1985% ( 14) 00:08:01.797 11594.831 - 11645.243: 95.4064% ( 31) 00:08:01.797 11645.243 - 11695.655: 95.5003% ( 14) 00:08:01.797 11695.655 - 11746.068: 95.6009% ( 15) 00:08:01.797 11746.068 - 11796.480: 95.7484% ( 22) 00:08:01.797 11796.480 - 11846.892: 95.8825% ( 20) 00:08:01.797 11846.892 - 11897.305: 95.9965% ( 17) 00:08:01.797 11897.305 - 11947.717: 96.1172% ( 18) 00:08:01.797 11947.717 - 11998.129: 96.2513% ( 20) 00:08:01.797 11998.129 - 12048.542: 96.3788% ( 19) 00:08:01.797 12048.542 - 12098.954: 96.5062% ( 19) 00:08:01.797 12098.954 - 12149.366: 96.5866% ( 12) 00:08:01.797 12149.366 - 12199.778: 96.6872% ( 15) 00:08:01.797 12199.778 - 12250.191: 96.7610% ( 11) 00:08:01.797 12250.191 - 12300.603: 96.8415% ( 12) 00:08:01.797 12300.603 - 12351.015: 96.9085% ( 10) 00:08:01.797 12351.015 - 12401.428: 96.9354% ( 4) 00:08:01.797 12401.428 - 12451.840: 96.9823% ( 7) 00:08:01.797 12451.840 - 12502.252: 97.0225% ( 6) 00:08:01.797 12502.252 - 12552.665: 97.0427% ( 3) 00:08:01.797 12552.665 - 12603.077: 97.0896% ( 7) 00:08:01.797 12603.077 - 12653.489: 97.1097% ( 3) 00:08:01.797 12653.489 - 12703.902: 97.1432% ( 5) 00:08:01.797 12703.902 - 12754.314: 97.1634% ( 3) 00:08:01.797 12754.314 - 12804.726: 97.2237% ( 9) 00:08:01.797 12804.726 - 12855.138: 97.2371% ( 2) 00:08:01.797 12855.138 - 12905.551: 97.2707% ( 5) 00:08:01.797 12905.551 - 13006.375: 97.3645% ( 14) 00:08:01.797 13006.375 - 13107.200: 97.4517% ( 13) 00:08:01.797 13107.200 - 13208.025: 97.5456% ( 14) 00:08:01.797 13208.025 - 13308.849: 97.6529% ( 16) 00:08:01.797 13308.849 - 13409.674: 97.7736% ( 18) 00:08:01.797 13409.674 - 13510.498: 97.8943% ( 18) 00:08:01.797 13510.498 - 13611.323: 97.9949% ( 15) 00:08:01.797 13611.323 - 13712.148: 98.0955% ( 15) 00:08:01.797 13712.148 - 13812.972: 98.1961% ( 15) 00:08:01.797 13812.972 - 13913.797: 98.2967% ( 15) 00:08:01.797 13913.797 - 14014.622: 98.3704% ( 11) 00:08:01.797 14014.622 - 14115.446: 98.4576% ( 13) 00:08:01.797 14115.446 - 14216.271: 98.5448% ( 13) 00:08:01.797 14216.271 - 14317.095: 98.5850% ( 6) 00:08:01.797 14317.095 - 14417.920: 98.6387% ( 8) 00:08:01.797 14417.920 - 14518.745: 98.6856% ( 7) 00:08:01.797 14518.745 - 14619.569: 98.7527% ( 10) 00:08:01.797 14619.569 - 14720.394: 98.8197% ( 10) 00:08:01.797 14720.394 - 14821.218: 98.8868% ( 10) 00:08:01.797 14821.218 - 14922.043: 98.9203% ( 5) 00:08:01.797 14922.043 - 15022.868: 98.9337% ( 2) 00:08:01.797 15022.868 - 15123.692: 98.9539% ( 3) 00:08:01.797 15123.692 - 15224.517: 98.9740% ( 3) 00:08:01.797 15224.517 - 
15325.342: 98.9941% ( 3) 00:08:01.797 15325.342 - 15426.166: 99.0209% ( 4) 00:08:01.797 15426.166 - 15526.991: 99.0343% ( 2) 00:08:01.797 15526.991 - 15627.815: 99.0612% ( 4) 00:08:01.797 15627.815 - 15728.640: 99.0813% ( 3) 00:08:01.797 15728.640 - 15829.465: 99.1014% ( 3) 00:08:01.797 15829.465 - 15930.289: 99.1215% ( 3) 00:08:01.797 15930.289 - 16031.114: 99.1349% ( 2) 00:08:01.797 16031.114 - 16131.938: 99.1416% ( 1) 00:08:01.797 30650.683 - 30852.332: 99.1819% ( 6) 00:08:01.797 30852.332 - 31053.982: 99.2288% ( 7) 00:08:01.797 31053.982 - 31255.631: 99.2825% ( 8) 00:08:01.797 31255.631 - 31457.280: 99.3361% ( 8) 00:08:01.797 31457.280 - 31658.929: 99.3898% ( 8) 00:08:01.797 31658.929 - 31860.578: 99.4434% ( 8) 00:08:01.797 31860.578 - 32062.228: 99.4970% ( 8) 00:08:01.797 32062.228 - 32263.877: 99.5507% ( 8) 00:08:01.797 32263.877 - 32465.526: 99.5708% ( 3) 00:08:01.797 36498.511 - 36700.160: 99.5909% ( 3) 00:08:01.797 36700.160 - 36901.809: 99.6446% ( 8) 00:08:01.797 36901.809 - 37103.458: 99.6982% ( 8) 00:08:01.797 37103.458 - 37305.108: 99.7519% ( 8) 00:08:01.797 37305.108 - 37506.757: 99.8055% ( 8) 00:08:01.797 37506.757 - 37708.406: 99.8659% ( 9) 00:08:01.797 37708.406 - 37910.055: 99.9195% ( 8) 00:08:01.797 37910.055 - 38111.705: 99.9732% ( 8) 00:08:01.797 38111.705 - 38313.354: 100.0000% ( 4) 00:08:01.797 00:08:01.797 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:01.797 ============================================================================== 00:08:01.797 Range in us Cumulative IO count 00:08:01.797 5948.652 - 5973.858: 0.0134% ( 2) 00:08:01.797 5973.858 - 5999.065: 0.0201% ( 1) 00:08:01.797 5999.065 - 6024.271: 0.0536% ( 5) 00:08:01.797 6024.271 - 6049.477: 0.1945% ( 21) 00:08:01.797 6049.477 - 6074.683: 0.3219% ( 19) 00:08:01.797 6074.683 - 6099.889: 0.5231% ( 30) 00:08:01.797 6099.889 - 6125.095: 0.7444% ( 33) 00:08:01.797 6125.095 - 6150.302: 1.0260% ( 42) 00:08:01.797 6150.302 - 6175.508: 1.4150% ( 58) 00:08:01.797 6175.508 - 6200.714: 1.9447% ( 79) 00:08:01.797 6200.714 - 6225.920: 2.5617% ( 92) 00:08:01.797 6225.920 - 6251.126: 3.1049% ( 81) 00:08:01.797 6251.126 - 6276.332: 3.6011% ( 74) 00:08:01.797 6276.332 - 6301.538: 4.1041% ( 75) 00:08:01.797 6301.538 - 6326.745: 4.7479% ( 96) 00:08:01.797 6326.745 - 6351.951: 5.3648% ( 92) 00:08:01.797 6351.951 - 6377.157: 5.9482% ( 87) 00:08:01.797 6377.157 - 6402.363: 6.4847% ( 80) 00:08:01.797 6402.363 - 6427.569: 7.1218% ( 95) 00:08:01.797 6427.569 - 6452.775: 7.8125% ( 103) 00:08:01.797 6452.775 - 6503.188: 9.0397% ( 183) 00:08:01.797 6503.188 - 6553.600: 10.3742% ( 199) 00:08:01.797 6553.600 - 6604.012: 11.7489% ( 205) 00:08:01.797 6604.012 - 6654.425: 13.0767% ( 198) 00:08:01.797 6654.425 - 6704.837: 14.5118% ( 214) 00:08:01.797 6704.837 - 6755.249: 15.8597% ( 201) 00:08:01.797 6755.249 - 6805.662: 17.2143% ( 202) 00:08:01.797 6805.662 - 6856.074: 18.5153% ( 194) 00:08:01.797 6856.074 - 6906.486: 19.7090% ( 178) 00:08:01.797 6906.486 - 6956.898: 20.6813% ( 145) 00:08:01.797 6956.898 - 7007.311: 21.4861% ( 120) 00:08:01.797 7007.311 - 7057.723: 22.1030% ( 92) 00:08:01.797 7057.723 - 7108.135: 22.6931% ( 88) 00:08:01.797 7108.135 - 7158.548: 23.1558% ( 69) 00:08:01.797 7158.548 - 7208.960: 23.6052% ( 67) 00:08:01.797 7208.960 - 7259.372: 24.1416% ( 80) 00:08:01.797 7259.372 - 7309.785: 24.5172% ( 56) 00:08:01.797 7309.785 - 7360.197: 24.8122% ( 44) 00:08:01.797 7360.197 - 7410.609: 25.2079% ( 59) 00:08:01.797 7410.609 - 7461.022: 25.5767% ( 55) 00:08:01.797 7461.022 - 7511.434: 25.9992% ( 63) 
00:08:01.797 7511.434 - 7561.846: 26.4284% ( 64) 00:08:01.797 7561.846 - 7612.258: 26.9246% ( 74) 00:08:01.797 7612.258 - 7662.671: 27.4410% ( 77) 00:08:01.797 7662.671 - 7713.083: 27.9909% ( 82) 00:08:01.797 7713.083 - 7763.495: 28.6883% ( 104) 00:08:01.797 7763.495 - 7813.908: 29.4729% ( 117) 00:08:01.797 7813.908 - 7864.320: 30.3983% ( 138) 00:08:01.797 7864.320 - 7914.732: 31.4445% ( 156) 00:08:01.797 7914.732 - 7965.145: 32.6717% ( 183) 00:08:01.797 7965.145 - 8015.557: 34.2409% ( 234) 00:08:01.797 8015.557 - 8065.969: 35.9509% ( 255) 00:08:01.797 8065.969 - 8116.382: 37.8085% ( 277) 00:08:01.797 8116.382 - 8166.794: 39.7800% ( 294) 00:08:01.797 8166.794 - 8217.206: 42.0198% ( 334) 00:08:01.797 8217.206 - 8267.618: 44.3535% ( 348) 00:08:01.797 8267.618 - 8318.031: 46.8012% ( 365) 00:08:01.797 8318.031 - 8368.443: 49.3696% ( 383) 00:08:01.797 8368.443 - 8418.855: 52.1325% ( 412) 00:08:01.797 8418.855 - 8469.268: 54.8686% ( 408) 00:08:01.797 8469.268 - 8519.680: 57.7454% ( 429) 00:08:01.797 8519.680 - 8570.092: 60.3943% ( 395) 00:08:01.797 8570.092 - 8620.505: 63.2310% ( 423) 00:08:01.797 8620.505 - 8670.917: 65.9335% ( 403) 00:08:01.797 8670.917 - 8721.329: 68.4214% ( 371) 00:08:01.797 8721.329 - 8771.742: 70.6813% ( 337) 00:08:01.797 8771.742 - 8822.154: 72.7133% ( 303) 00:08:01.797 8822.154 - 8872.566: 74.6312% ( 286) 00:08:01.797 8872.566 - 8922.978: 76.4418% ( 270) 00:08:01.797 8922.978 - 8973.391: 78.0311% ( 237) 00:08:01.797 8973.391 - 9023.803: 79.3991% ( 204) 00:08:01.797 9023.803 - 9074.215: 80.6465% ( 186) 00:08:01.797 9074.215 - 9124.628: 81.7731% ( 168) 00:08:01.797 9124.628 - 9175.040: 82.6784% ( 135) 00:08:01.797 9175.040 - 9225.452: 83.4026% ( 108) 00:08:01.797 9225.452 - 9275.865: 84.0062% ( 90) 00:08:01.797 9275.865 - 9326.277: 84.5561% ( 82) 00:08:01.797 9326.277 - 9376.689: 85.0791% ( 78) 00:08:01.797 9376.689 - 9427.102: 85.4815% ( 60) 00:08:01.797 9427.102 - 9477.514: 85.8034% ( 48) 00:08:01.797 9477.514 - 9527.926: 86.1521% ( 52) 00:08:01.797 9527.926 - 9578.338: 86.4606% ( 46) 00:08:01.797 9578.338 - 9628.751: 86.7288% ( 40) 00:08:01.797 9628.751 - 9679.163: 86.9769% ( 37) 00:08:01.797 9679.163 - 9729.575: 87.2519% ( 41) 00:08:01.797 9729.575 - 9779.988: 87.5201% ( 40) 00:08:01.797 9779.988 - 9830.400: 87.7615% ( 36) 00:08:01.797 9830.400 - 9880.812: 88.0298% ( 40) 00:08:01.797 9880.812 - 9931.225: 88.3047% ( 41) 00:08:01.797 9931.225 - 9981.637: 88.6199% ( 47) 00:08:01.797 9981.637 - 10032.049: 88.9150% ( 44) 00:08:01.797 10032.049 - 10082.462: 89.2234% ( 46) 00:08:01.797 10082.462 - 10132.874: 89.5051% ( 42) 00:08:01.797 10132.874 - 10183.286: 89.7532% ( 37) 00:08:01.797 10183.286 - 10233.698: 89.9678% ( 32) 00:08:01.797 10233.698 - 10284.111: 90.2159% ( 37) 00:08:01.797 10284.111 - 10334.523: 90.4842% ( 40) 00:08:01.797 10334.523 - 10384.935: 90.7189% ( 35) 00:08:01.797 10384.935 - 10435.348: 90.9268% ( 31) 00:08:01.798 10435.348 - 10485.760: 91.1481% ( 33) 00:08:01.798 10485.760 - 10536.172: 91.4096% ( 39) 00:08:01.798 10536.172 - 10586.585: 91.6376% ( 34) 00:08:01.798 10586.585 - 10636.997: 91.8455% ( 31) 00:08:01.798 10636.997 - 10687.409: 92.0802% ( 35) 00:08:01.798 10687.409 - 10737.822: 92.3082% ( 34) 00:08:01.798 10737.822 - 10788.234: 92.5027% ( 29) 00:08:01.798 10788.234 - 10838.646: 92.7106% ( 31) 00:08:01.798 10838.646 - 10889.058: 92.8715% ( 24) 00:08:01.798 10889.058 - 10939.471: 93.0056% ( 20) 00:08:01.798 10939.471 - 10989.883: 93.1330% ( 19) 00:08:01.798 10989.883 - 11040.295: 93.2538% ( 18) 00:08:01.798 11040.295 - 11090.708: 93.3678% ( 17) 
00:08:01.798 11090.708 - 11141.120: 93.4818% ( 17) 00:08:01.798 11141.120 - 11191.532: 93.5958% ( 17) 00:08:01.798 11191.532 - 11241.945: 93.7433% ( 22) 00:08:01.798 11241.945 - 11292.357: 93.8841% ( 21) 00:08:01.798 11292.357 - 11342.769: 94.0451% ( 24) 00:08:01.798 11342.769 - 11393.182: 94.1993% ( 23) 00:08:01.798 11393.182 - 11443.594: 94.3737% ( 26) 00:08:01.798 11443.594 - 11494.006: 94.5614% ( 28) 00:08:01.798 11494.006 - 11544.418: 94.7157% ( 23) 00:08:01.798 11544.418 - 11594.831: 94.8699% ( 23) 00:08:01.798 11594.831 - 11645.243: 95.0241% ( 23) 00:08:01.798 11645.243 - 11695.655: 95.1516% ( 19) 00:08:01.798 11695.655 - 11746.068: 95.2790% ( 19) 00:08:01.798 11746.068 - 11796.480: 95.4198% ( 21) 00:08:01.798 11796.480 - 11846.892: 95.5807% ( 24) 00:08:01.798 11846.892 - 11897.305: 95.7149% ( 20) 00:08:01.798 11897.305 - 11947.717: 95.8423% ( 19) 00:08:01.798 11947.717 - 11998.129: 95.9563% ( 17) 00:08:01.798 11998.129 - 12048.542: 96.0770% ( 18) 00:08:01.798 12048.542 - 12098.954: 96.2312% ( 23) 00:08:01.798 12098.954 - 12149.366: 96.3586% ( 19) 00:08:01.798 12149.366 - 12199.778: 96.4995% ( 21) 00:08:01.798 12199.778 - 12250.191: 96.6135% ( 17) 00:08:01.798 12250.191 - 12300.603: 96.7543% ( 21) 00:08:01.798 12300.603 - 12351.015: 96.8616% ( 16) 00:08:01.798 12351.015 - 12401.428: 96.9421% ( 12) 00:08:01.798 12401.428 - 12451.840: 97.0359% ( 14) 00:08:01.798 12451.840 - 12502.252: 97.1365% ( 15) 00:08:01.798 12502.252 - 12552.665: 97.2237% ( 13) 00:08:01.798 12552.665 - 12603.077: 97.2841% ( 9) 00:08:01.798 12603.077 - 12653.489: 97.3109% ( 4) 00:08:01.798 12653.489 - 12703.902: 97.3377% ( 4) 00:08:01.798 12703.902 - 12754.314: 97.3578% ( 3) 00:08:01.798 12754.314 - 12804.726: 97.3847% ( 4) 00:08:01.798 12804.726 - 12855.138: 97.4115% ( 4) 00:08:01.798 12855.138 - 12905.551: 97.4517% ( 6) 00:08:01.798 12905.551 - 13006.375: 97.5054% ( 8) 00:08:01.798 13006.375 - 13107.200: 97.5523% ( 7) 00:08:01.798 13107.200 - 13208.025: 97.6194% ( 10) 00:08:01.798 13208.025 - 13308.849: 97.7133% ( 14) 00:08:01.798 13308.849 - 13409.674: 97.8004% ( 13) 00:08:01.798 13409.674 - 13510.498: 97.8943% ( 14) 00:08:01.798 13510.498 - 13611.323: 97.9882% ( 14) 00:08:01.798 13611.323 - 13712.148: 98.0754% ( 13) 00:08:01.798 13712.148 - 13812.972: 98.1357% ( 9) 00:08:01.798 13812.972 - 13913.797: 98.1961% ( 9) 00:08:01.798 13913.797 - 14014.622: 98.2967% ( 15) 00:08:01.798 14014.622 - 14115.446: 98.3839% ( 13) 00:08:01.798 14115.446 - 14216.271: 98.4509% ( 10) 00:08:01.798 14216.271 - 14317.095: 98.4979% ( 7) 00:08:01.798 14317.095 - 14417.920: 98.5515% ( 8) 00:08:01.798 14417.920 - 14518.745: 98.5984% ( 7) 00:08:01.798 14518.745 - 14619.569: 98.6454% ( 7) 00:08:01.798 14619.569 - 14720.394: 98.6990% ( 8) 00:08:01.798 14720.394 - 14821.218: 98.7124% ( 2) 00:08:01.798 15224.517 - 15325.342: 98.7393% ( 4) 00:08:01.798 15325.342 - 15426.166: 98.7862% ( 7) 00:08:01.798 15426.166 - 15526.991: 98.8197% ( 5) 00:08:01.798 15526.991 - 15627.815: 98.8332% ( 2) 00:08:01.798 15627.815 - 15728.640: 98.8734% ( 6) 00:08:01.798 15728.640 - 15829.465: 98.9270% ( 8) 00:08:01.798 15829.465 - 15930.289: 98.9740% ( 7) 00:08:01.798 15930.289 - 16031.114: 99.0276% ( 8) 00:08:01.798 16031.114 - 16131.938: 99.0813% ( 8) 00:08:01.798 16131.938 - 16232.763: 99.1282% ( 7) 00:08:01.798 16232.763 - 16333.588: 99.1416% ( 2) 00:08:01.798 29440.788 - 29642.437: 99.1685% ( 4) 00:08:01.798 29642.437 - 29844.086: 99.2221% ( 8) 00:08:01.798 29844.086 - 30045.735: 99.2758% ( 8) 00:08:01.798 30045.735 - 30247.385: 99.3361% ( 9) 00:08:01.798 
30247.385 - 30449.034: 99.3898% ( 8) 00:08:01.798 30449.034 - 30650.683: 99.4501% ( 9) 00:08:01.798 30650.683 - 30852.332: 99.5105% ( 9) 00:08:01.798 30852.332 - 31053.982: 99.5641% ( 8) 00:08:01.798 31053.982 - 31255.631: 99.5708% ( 1) 00:08:01.798 34885.317 - 35086.966: 99.5976% ( 4) 00:08:01.798 35086.966 - 35288.615: 99.6580% ( 9) 00:08:01.798 35288.615 - 35490.265: 99.7116% ( 8) 00:08:01.798 35490.265 - 35691.914: 99.7720% ( 9) 00:08:01.798 35691.914 - 35893.563: 99.8256% ( 8) 00:08:01.798 35893.563 - 36095.212: 99.8860% ( 9) 00:08:01.798 36095.212 - 36296.862: 99.9464% ( 9) 00:08:01.798 36296.862 - 36498.511: 100.0000% ( 8) 00:08:01.798 00:08:01.798 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:01.798 ============================================================================== 00:08:01.798 Range in us Cumulative IO count 00:08:01.798 5999.065 - 6024.271: 0.0335% ( 5) 00:08:01.798 6024.271 - 6049.477: 0.1073% ( 11) 00:08:01.798 6049.477 - 6074.683: 0.1811% ( 11) 00:08:01.798 6074.683 - 6099.889: 0.2749% ( 14) 00:08:01.798 6099.889 - 6125.095: 0.8181% ( 81) 00:08:01.798 6125.095 - 6150.302: 1.1400% ( 48) 00:08:01.798 6150.302 - 6175.508: 1.5625% ( 63) 00:08:01.798 6175.508 - 6200.714: 2.0185% ( 68) 00:08:01.798 6200.714 - 6225.920: 2.5080% ( 73) 00:08:01.798 6225.920 - 6251.126: 3.1719% ( 99) 00:08:01.798 6251.126 - 6276.332: 3.7285% ( 83) 00:08:01.798 6276.332 - 6301.538: 4.2986% ( 85) 00:08:01.798 6301.538 - 6326.745: 4.8283% ( 79) 00:08:01.798 6326.745 - 6351.951: 5.3715% ( 81) 00:08:01.798 6351.951 - 6377.157: 6.0019% ( 94) 00:08:01.798 6377.157 - 6402.363: 6.6591% ( 98) 00:08:01.798 6402.363 - 6427.569: 7.3364% ( 101) 00:08:01.798 6427.569 - 6452.775: 8.0070% ( 100) 00:08:01.798 6452.775 - 6503.188: 9.2342% ( 183) 00:08:01.798 6503.188 - 6553.600: 10.4681% ( 184) 00:08:01.798 6553.600 - 6604.012: 11.9166% ( 216) 00:08:01.798 6604.012 - 6654.425: 13.2041% ( 192) 00:08:01.798 6654.425 - 6704.837: 14.5319% ( 198) 00:08:01.798 6704.837 - 6755.249: 15.9134% ( 206) 00:08:01.798 6755.249 - 6805.662: 17.2881% ( 205) 00:08:01.798 6805.662 - 6856.074: 18.6159% ( 198) 00:08:01.798 6856.074 - 6906.486: 19.8833% ( 189) 00:08:01.798 6906.486 - 6956.898: 20.9831% ( 164) 00:08:01.798 6956.898 - 7007.311: 21.7476% ( 114) 00:08:01.798 7007.311 - 7057.723: 22.4048% ( 98) 00:08:01.798 7057.723 - 7108.135: 22.8943% ( 73) 00:08:01.798 7108.135 - 7158.548: 23.2698% ( 56) 00:08:01.798 7158.548 - 7208.960: 23.6789% ( 61) 00:08:01.798 7208.960 - 7259.372: 24.0410% ( 54) 00:08:01.798 7259.372 - 7309.785: 24.3696% ( 49) 00:08:01.798 7309.785 - 7360.197: 24.7251% ( 53) 00:08:01.798 7360.197 - 7410.609: 25.1945% ( 70) 00:08:01.798 7410.609 - 7461.022: 25.6304% ( 65) 00:08:01.798 7461.022 - 7511.434: 26.0528% ( 63) 00:08:01.798 7511.434 - 7561.846: 26.5424% ( 73) 00:08:01.798 7561.846 - 7612.258: 27.0587% ( 77) 00:08:01.798 7612.258 - 7662.671: 27.6757% ( 92) 00:08:01.798 7662.671 - 7713.083: 28.2792% ( 90) 00:08:01.798 7713.083 - 7763.495: 29.0705% ( 118) 00:08:01.798 7763.495 - 7813.908: 30.0429% ( 145) 00:08:01.798 7813.908 - 7864.320: 31.1427% ( 164) 00:08:01.798 7864.320 - 7914.732: 32.2626% ( 167) 00:08:01.798 7914.732 - 7965.145: 33.5770% ( 196) 00:08:01.798 7965.145 - 8015.557: 35.1462% ( 234) 00:08:01.798 8015.557 - 8065.969: 36.8830% ( 259) 00:08:01.798 8065.969 - 8116.382: 38.9016% ( 301) 00:08:01.798 8116.382 - 8166.794: 40.8463% ( 290) 00:08:01.798 8166.794 - 8217.206: 42.9185% ( 309) 00:08:01.798 8217.206 - 8267.618: 45.2119% ( 342) 00:08:01.798 8267.618 - 8318.031: 
47.5389% ( 347) 00:08:01.798 8318.031 - 8368.443: 50.0671% ( 377) 00:08:01.798 8368.443 - 8418.855: 52.7428% ( 399) 00:08:01.798 8418.855 - 8469.268: 55.4050% ( 397) 00:08:01.798 8469.268 - 8519.680: 58.1076% ( 403) 00:08:01.798 8519.680 - 8570.092: 60.7028% ( 387) 00:08:01.798 8570.092 - 8620.505: 63.1371% ( 363) 00:08:01.798 8620.505 - 8670.917: 65.5177% ( 355) 00:08:01.798 8670.917 - 8721.329: 67.8246% ( 344) 00:08:01.798 8721.329 - 8771.742: 70.0644% ( 334) 00:08:01.798 8771.742 - 8822.154: 72.1768% ( 315) 00:08:01.798 8822.154 - 8872.566: 74.0813% ( 284) 00:08:01.798 8872.566 - 8922.978: 75.8450% ( 263) 00:08:01.798 8922.978 - 8973.391: 77.3806% ( 229) 00:08:01.798 8973.391 - 9023.803: 78.8090% ( 213) 00:08:01.798 9023.803 - 9074.215: 79.9624% ( 172) 00:08:01.798 9074.215 - 9124.628: 80.9214% ( 143) 00:08:01.798 9124.628 - 9175.040: 81.8670% ( 141) 00:08:01.798 9175.040 - 9225.452: 82.6583% ( 118) 00:08:01.798 9225.452 - 9275.865: 83.3959% ( 110) 00:08:01.798 9275.865 - 9326.277: 83.9928% ( 89) 00:08:01.798 9326.277 - 9376.689: 84.5695% ( 86) 00:08:01.798 9376.689 - 9427.102: 85.0255% ( 68) 00:08:01.798 9427.102 - 9477.514: 85.4748% ( 67) 00:08:01.798 9477.514 - 9527.926: 85.9107% ( 65) 00:08:01.798 9527.926 - 9578.338: 86.3130% ( 60) 00:08:01.798 9578.338 - 9628.751: 86.6886% ( 56) 00:08:01.798 9628.751 - 9679.163: 86.9970% ( 46) 00:08:01.798 9679.163 - 9729.575: 87.3256% ( 49) 00:08:01.798 9729.575 - 9779.988: 87.6878% ( 54) 00:08:01.798 9779.988 - 9830.400: 87.9962% ( 46) 00:08:01.798 9830.400 - 9880.812: 88.3248% ( 49) 00:08:01.798 9880.812 - 9931.225: 88.6333% ( 46) 00:08:01.798 9931.225 - 9981.637: 88.9083% ( 41) 00:08:01.798 9981.637 - 10032.049: 89.1899% ( 42) 00:08:01.798 10032.049 - 10082.462: 89.4649% ( 41) 00:08:01.798 10082.462 - 10132.874: 89.7599% ( 44) 00:08:01.798 10132.874 - 10183.286: 90.0751% ( 47) 00:08:01.799 10183.286 - 10233.698: 90.3232% ( 37) 00:08:01.799 10233.698 - 10284.111: 90.5714% ( 37) 00:08:01.799 10284.111 - 10334.523: 90.8061% ( 35) 00:08:01.799 10334.523 - 10384.935: 91.0139% ( 31) 00:08:01.799 10384.935 - 10435.348: 91.2151% ( 30) 00:08:01.799 10435.348 - 10485.760: 91.4230% ( 31) 00:08:01.799 10485.760 - 10536.172: 91.6108% ( 28) 00:08:01.799 10536.172 - 10586.585: 91.7315% ( 18) 00:08:01.799 10586.585 - 10636.997: 91.8790% ( 22) 00:08:01.799 10636.997 - 10687.409: 92.0198% ( 21) 00:08:01.799 10687.409 - 10737.822: 92.1741% ( 23) 00:08:01.799 10737.822 - 10788.234: 92.3216% ( 22) 00:08:01.799 10788.234 - 10838.646: 92.5027% ( 27) 00:08:01.799 10838.646 - 10889.058: 92.6368% ( 20) 00:08:01.799 10889.058 - 10939.471: 92.7843% ( 22) 00:08:01.799 10939.471 - 10989.883: 92.9185% ( 20) 00:08:01.799 10989.883 - 11040.295: 93.0392% ( 18) 00:08:01.799 11040.295 - 11090.708: 93.1733% ( 20) 00:08:01.799 11090.708 - 11141.120: 93.3141% ( 21) 00:08:01.799 11141.120 - 11191.532: 93.4549% ( 21) 00:08:01.799 11191.532 - 11241.945: 93.5756% ( 18) 00:08:01.799 11241.945 - 11292.357: 93.7098% ( 20) 00:08:01.799 11292.357 - 11342.769: 93.7902% ( 12) 00:08:01.799 11342.769 - 11393.182: 93.8841% ( 14) 00:08:01.799 11393.182 - 11443.594: 94.0719% ( 28) 00:08:01.799 11443.594 - 11494.006: 94.2530% ( 27) 00:08:01.799 11494.006 - 11544.418: 94.4139% ( 24) 00:08:01.799 11544.418 - 11594.831: 94.5681% ( 23) 00:08:01.799 11594.831 - 11645.243: 94.7023% ( 20) 00:08:01.799 11645.243 - 11695.655: 94.8699% ( 25) 00:08:01.799 11695.655 - 11746.068: 95.0174% ( 22) 00:08:01.799 11746.068 - 11796.480: 95.1516% ( 20) 00:08:01.799 11796.480 - 11846.892: 95.2924% ( 21) 00:08:01.799 
11846.892 - 11897.305: 95.4131% ( 18) 00:08:01.799 11897.305 - 11947.717: 95.5405% ( 19) 00:08:01.799 11947.717 - 11998.129: 95.6612% ( 18) 00:08:01.799 11998.129 - 12048.542: 95.7685% ( 16) 00:08:01.799 12048.542 - 12098.954: 95.8758% ( 16) 00:08:01.799 12098.954 - 12149.366: 96.0099% ( 20) 00:08:01.799 12149.366 - 12199.778: 96.1373% ( 19) 00:08:01.799 12199.778 - 12250.191: 96.2782% ( 21) 00:08:01.799 12250.191 - 12300.603: 96.4458% ( 25) 00:08:01.799 12300.603 - 12351.015: 96.5464% ( 15) 00:08:01.799 12351.015 - 12401.428: 96.6604% ( 17) 00:08:01.799 12401.428 - 12451.840: 96.7610% ( 15) 00:08:01.799 12451.840 - 12502.252: 96.8750% ( 17) 00:08:01.799 12502.252 - 12552.665: 96.9957% ( 18) 00:08:01.799 12552.665 - 12603.077: 97.1365% ( 21) 00:08:01.799 12603.077 - 12653.489: 97.2505% ( 17) 00:08:01.799 12653.489 - 12703.902: 97.3712% ( 18) 00:08:01.799 12703.902 - 12754.314: 97.5322% ( 24) 00:08:01.799 12754.314 - 12804.726: 97.6596% ( 19) 00:08:01.799 12804.726 - 12855.138: 97.7535% ( 14) 00:08:01.799 12855.138 - 12905.551: 97.8340% ( 12) 00:08:01.799 12905.551 - 13006.375: 98.0016% ( 25) 00:08:01.799 13006.375 - 13107.200: 98.0620% ( 9) 00:08:01.799 13107.200 - 13208.025: 98.1089% ( 7) 00:08:01.799 13208.025 - 13308.849: 98.1558% ( 7) 00:08:01.799 13308.849 - 13409.674: 98.2028% ( 7) 00:08:01.799 13409.674 - 13510.498: 98.2564% ( 8) 00:08:01.799 13510.498 - 13611.323: 98.3235% ( 10) 00:08:01.799 13611.323 - 13712.148: 98.3570% ( 5) 00:08:01.799 13712.148 - 13812.972: 98.3771% ( 3) 00:08:01.799 13812.972 - 13913.797: 98.4040% ( 4) 00:08:01.799 13913.797 - 14014.622: 98.4308% ( 4) 00:08:01.799 14014.622 - 14115.446: 98.4576% ( 4) 00:08:01.799 14115.446 - 14216.271: 98.4911% ( 5) 00:08:01.799 14216.271 - 14317.095: 98.5180% ( 4) 00:08:01.799 14317.095 - 14417.920: 98.5448% ( 4) 00:08:01.799 14417.920 - 14518.745: 98.5716% ( 4) 00:08:01.799 14518.745 - 14619.569: 98.5984% ( 4) 00:08:01.799 14619.569 - 14720.394: 98.6186% ( 3) 00:08:01.799 14720.394 - 14821.218: 98.6454% ( 4) 00:08:01.799 14821.218 - 14922.043: 98.6722% ( 4) 00:08:01.799 14922.043 - 15022.868: 98.7057% ( 5) 00:08:01.799 15022.868 - 15123.692: 98.7124% ( 1) 00:08:01.799 15325.342 - 15426.166: 98.7393% ( 4) 00:08:01.799 15426.166 - 15526.991: 98.8197% ( 12) 00:08:01.799 15526.991 - 15627.815: 98.8600% ( 6) 00:08:01.799 15627.815 - 15728.640: 98.9069% ( 7) 00:08:01.799 15728.640 - 15829.465: 98.9606% ( 8) 00:08:01.799 15829.465 - 15930.289: 99.0075% ( 7) 00:08:01.799 15930.289 - 16031.114: 99.0545% ( 7) 00:08:01.799 16031.114 - 16131.938: 99.1081% ( 8) 00:08:01.799 16131.938 - 16232.763: 99.1416% ( 5) 00:08:01.799 28835.840 - 29037.489: 99.1886% ( 7) 00:08:01.799 29037.489 - 29239.138: 99.2422% ( 8) 00:08:01.799 29239.138 - 29440.788: 99.3026% ( 9) 00:08:01.799 29440.788 - 29642.437: 99.3562% ( 8) 00:08:01.799 29642.437 - 29844.086: 99.4099% ( 8) 00:08:01.799 29844.086 - 30045.735: 99.4635% ( 8) 00:08:01.799 30045.735 - 30247.385: 99.5172% ( 8) 00:08:01.799 30247.385 - 30449.034: 99.5708% ( 8) 00:08:01.799 33675.422 - 33877.071: 99.6178% ( 7) 00:08:01.799 33877.071 - 34078.720: 99.6714% ( 8) 00:08:01.799 34078.720 - 34280.369: 99.7251% ( 8) 00:08:01.799 34280.369 - 34482.018: 99.7787% ( 8) 00:08:01.799 34482.018 - 34683.668: 99.8391% ( 9) 00:08:01.799 34683.668 - 34885.317: 99.8927% ( 8) 00:08:01.799 34885.317 - 35086.966: 99.9531% ( 9) 00:08:01.799 35086.966 - 35288.615: 100.0000% ( 7) 00:08:01.799 00:08:01.799 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:01.799 
============================================================================== 00:08:01.799 Range in us Cumulative IO count 00:08:01.799 5948.652 - 5973.858: 0.0067% ( 1) 00:08:01.799 5973.858 - 5999.065: 0.0201% ( 2) 00:08:01.799 5999.065 - 6024.271: 0.0671% ( 7) 00:08:01.799 6024.271 - 6049.477: 0.1542% ( 13) 00:08:01.799 6049.477 - 6074.683: 0.3487% ( 29) 00:08:01.799 6074.683 - 6099.889: 0.5097% ( 24) 00:08:01.799 6099.889 - 6125.095: 0.7779% ( 40) 00:08:01.799 6125.095 - 6150.302: 1.0998% ( 48) 00:08:01.799 6150.302 - 6175.508: 1.5156% ( 62) 00:08:01.799 6175.508 - 6200.714: 2.0722% ( 83) 00:08:01.799 6200.714 - 6225.920: 2.5818% ( 76) 00:08:01.799 6225.920 - 6251.126: 3.0579% ( 71) 00:08:01.799 6251.126 - 6276.332: 3.6212% ( 84) 00:08:01.799 6276.332 - 6301.538: 4.2248% ( 90) 00:08:01.799 6301.538 - 6326.745: 4.8015% ( 86) 00:08:01.799 6326.745 - 6351.951: 5.3514% ( 82) 00:08:01.799 6351.951 - 6377.157: 5.9281% ( 86) 00:08:01.799 6377.157 - 6402.363: 6.5518% ( 93) 00:08:01.799 6402.363 - 6427.569: 7.1419% ( 88) 00:08:01.799 6427.569 - 6452.775: 7.7790% ( 95) 00:08:01.799 6452.775 - 6503.188: 9.0531% ( 190) 00:08:01.799 6503.188 - 6553.600: 10.3809% ( 198) 00:08:01.799 6553.600 - 6604.012: 11.7020% ( 197) 00:08:01.799 6604.012 - 6654.425: 13.0097% ( 195) 00:08:01.799 6654.425 - 6704.837: 14.3777% ( 204) 00:08:01.799 6704.837 - 6755.249: 15.6988% ( 197) 00:08:01.799 6755.249 - 6805.662: 17.0467% ( 201) 00:08:01.799 6805.662 - 6856.074: 18.3007% ( 187) 00:08:01.799 6856.074 - 6906.486: 19.5480% ( 186) 00:08:01.799 6906.486 - 6956.898: 20.5606% ( 151) 00:08:01.799 6956.898 - 7007.311: 21.3855% ( 123) 00:08:01.799 7007.311 - 7057.723: 21.9957% ( 91) 00:08:01.799 7057.723 - 7108.135: 22.4987% ( 75) 00:08:01.799 7108.135 - 7158.548: 22.8943% ( 59) 00:08:01.799 7158.548 - 7208.960: 23.4040% ( 76) 00:08:01.799 7208.960 - 7259.372: 23.8600% ( 68) 00:08:01.799 7259.372 - 7309.785: 24.2422% ( 57) 00:08:01.799 7309.785 - 7360.197: 24.5775% ( 50) 00:08:01.799 7360.197 - 7410.609: 24.9531% ( 56) 00:08:01.799 7410.609 - 7461.022: 25.4493% ( 74) 00:08:01.799 7461.022 - 7511.434: 25.8986% ( 67) 00:08:01.799 7511.434 - 7561.846: 26.3211% ( 63) 00:08:01.799 7561.846 - 7612.258: 26.8307% ( 76) 00:08:01.799 7612.258 - 7662.671: 27.3806% ( 82) 00:08:01.799 7662.671 - 7713.083: 27.9104% ( 79) 00:08:01.799 7713.083 - 7763.495: 28.6347% ( 108) 00:08:01.799 7763.495 - 7813.908: 29.6473% ( 151) 00:08:01.799 7813.908 - 7864.320: 30.6800% ( 154) 00:08:01.799 7864.320 - 7914.732: 31.8267% ( 171) 00:08:01.799 7914.732 - 7965.145: 33.2082% ( 206) 00:08:01.799 7965.145 - 8015.557: 34.6499% ( 215) 00:08:01.799 8015.557 - 8065.969: 36.1789% ( 228) 00:08:01.799 8065.969 - 8116.382: 38.1907% ( 300) 00:08:01.799 8116.382 - 8166.794: 40.2562% ( 308) 00:08:01.799 8166.794 - 8217.206: 42.6770% ( 361) 00:08:01.799 8217.206 - 8267.618: 44.9705% ( 342) 00:08:01.799 8267.618 - 8318.031: 47.4517% ( 370) 00:08:01.799 8318.031 - 8368.443: 50.0805% ( 392) 00:08:01.799 8368.443 - 8418.855: 52.7763% ( 402) 00:08:01.799 8418.855 - 8469.268: 55.5392% ( 412) 00:08:01.799 8469.268 - 8519.680: 58.1947% ( 396) 00:08:01.799 8519.680 - 8570.092: 60.8436% ( 395) 00:08:01.799 8570.092 - 8620.505: 63.4724% ( 392) 00:08:01.799 8620.505 - 8670.917: 66.0676% ( 387) 00:08:01.799 8670.917 - 8721.329: 68.3611% ( 342) 00:08:01.799 8721.329 - 8771.742: 70.5137% ( 321) 00:08:01.800 8771.742 - 8822.154: 72.5590% ( 305) 00:08:01.800 8822.154 - 8872.566: 74.4635% ( 284) 00:08:01.800 8872.566 - 8922.978: 76.2339% ( 264) 00:08:01.800 8922.978 - 8973.391: 
00:08:01.800 [ remaining latency buckets for PCIE (0000:00:12.0) NSID 1 flattened in capture: buckets up to 33675.422 us, cumulative IO count reaching 100.0000% ( 2 ) ]
00:08:01.800 
00:08:01.800 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:01.800 ==============================================================================
00:08:01.800        Range in us     Cumulative    IO count
00:08:01.801 [ latency buckets flattened in capture: 5999.065 us through 31860.578 us, cumulative IO count reaching 100.0000% ( 4 ) ]
00:08:01.801 
00:08:01.801 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:01.801 ==============================================================================
00:08:01.801        Range in us     Cumulative    IO count
00:08:01.802 [ latency buckets flattened in capture: 5973.858 us through 26617.698 us, cumulative IO count reaching 100.0000% ( 3 ) ]
00:08:01.802 
00:08:01.802 21:39:20 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:03.188 Initializing NVMe Controllers
00:08:03.188 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:03.188 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:03.188 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:03.188 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:03.188 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:03.188 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:03.188 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:03.188 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:03.188 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:03.188 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:03.188 Initialization complete. Launching workers.
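The write-phase results below come from the spdk_nvme_perf invocation captured above. As a gloss, the following sketch restates that command with flag meanings paraphrased from the tool's usage text as I understand it; treat the comments as annotations, not captured output, and the binary path as the one the harness happens to use.

#!/usr/bin/env bash
# Reconstruction of the write-phase run captured above (a sketch; flag
# meanings paraphrased from spdk_nvme_perf's usage text):
#   -q 128    queue depth: outstanding I/Os per namespace
#   -w write  I/O pattern: sequential writes
#   -o 12288  I/O size in bytes (12 KiB per command)
#   -t 1      run time in seconds
#   -L        enable latency tracking; given twice (-LL), perf also
#             prints the per-namespace latency histograms seen below
#   -i 0      shared-memory group ID, so perf can coexist with other
#             SPDK processes on the same host
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$PERF" -q 128 -w write -o 12288 -t 1 -LL -i 0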
00:08:03.189 ========================================================
00:08:03.189                                                 Latency(us)
00:08:03.189 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:03.189 PCIE (0000:00:10.0) NSID 1 from core 0:   13926.89     163.21    9202.97    6979.20   32671.37
00:08:03.189 PCIE (0000:00:11.0) NSID 1 from core 0:   13926.89     163.21    9188.91    7149.18   30885.71
00:08:03.189 PCIE (0000:00:13.0) NSID 1 from core 0:   13926.89     163.21    9174.83    7178.48   29870.97
00:08:03.189 PCIE (0000:00:12.0) NSID 1 from core 0:   13926.89     163.21    9160.79    7134.60   28250.24
00:08:03.189 PCIE (0000:00:12.0) NSID 2 from core 0:   13926.89     163.21    9146.81    7305.86   26568.18
00:08:03.189 PCIE (0000:00:12.0) NSID 3 from core 0:   13990.77     163.95    9091.18    7318.12   20542.47
00:08:03.189 ========================================================
00:08:03.189 Total                                  :   83625.22     979.98    9160.86    6979.20   32671.37
00:08:03.189 
00:08:03.189 Summary latency data (us), all six namespaces (columns in the device order above):
00:08:03.189 =================================================================================
00:08:03.189 Percentile    00:10.0/NS1  00:11.0/NS1  00:13.0/NS1  00:12.0/NS1  00:12.0/NS2  00:12.0/NS3
00:08:03.189   1.00000%       7511.434     7561.846     7612.258     7662.671     7713.083     7662.671
00:08:03.189  10.00000%       7965.145     8065.969     8065.969     8065.969     8015.557     8015.557
00:08:03.189  25.00000%       8318.031     8318.031     8368.443     8368.443     8368.443     8368.443
00:08:03.189  50.00000%       8721.329     8721.329     8721.329     8721.329     8721.329     8721.329
00:08:03.189  75.00000%       9376.689     9326.277     9275.865     9275.865     9275.865     9275.865
00:08:03.189  90.00000%      10687.409    10687.409    10586.585    10737.822    10788.234    10687.409
00:08:03.189  95.00000%      12098.954    12098.954    12149.366    11998.129    11846.892    11897.305
00:08:03.189  98.00000%      13510.498    13208.025    13308.849    13510.498    13913.797    13812.972
00:08:03.189  99.00000%      15426.166    15123.692    15930.289    15526.991    14821.218    14417.920
00:08:03.189  99.50000%      27020.997    25206.154    24298.732    22685.538    20971.520    15123.692
00:08:03.189  99.90000%      32465.526    30650.683    29642.437    28029.243    26416.049    20265.748
00:08:03.189  99.99000%      32667.175    31053.982    30045.735    28230.892    26617.698    20568.222
00:08:03.189  99.99900%      32868.825    31053.982    30045.735    28432.542    26617.698    20568.222
00:08:03.189  99.99990%      32868.825    31053.982    30045.735    28432.542    26617.698    20568.222
00:08:03.189  99.99999%      32868.825    31053.982    30045.735    28432.542    26617.698    20568.222
00:08:03.189 
00:08:03.189 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:03.189 ==============================================================================
00:08:03.189        Range in us     Cumulative    IO count
00:08:03.190 [ latency buckets flattened in capture: 6956.898 us through 32868.825 us, cumulative IO count reaching 100.0000% ( 1 ) ]
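The detailed histograms and the percentile summaries above carry the same information: each bucket row of the form "<low> - <high>: <cum>% ( <count> )" gives a latency bucket in microseconds, the cumulative share of I/Os completed at or below its upper edge, and the bucket's own I/O count, and a summary line such as "50.00000% : 8721.329us" is simply the upper edge of the first bucket whose cumulative share reaches that percentile. A minimal awk sketch of that lookup, assuming bucket rows have been restored to one per line with leading timestamps stripped (the script and file names are hypothetical):

#!/usr/bin/env bash
# percentile.sh -- look up a latency percentile from spdk_nvme_perf
# histogram rows of the form:  <low> - <high>: <cum>% ( <count> )
# Usage: ./percentile.sh 50 nsid1_buckets.txt   (both names hypothetical)
pct="$1"; file="$2"

awk -v p="$pct" '
  NF >= 4 {
    high = $3; sub(/:$/, "", high)   # strip trailing ":" from the bucket edge
    cum  = $4; sub(/%$/, "", cum)    # strip trailing "%" from cumulative share
    if (cum + 0 >= p + 0) {          # first bucket at or past the target
      printf "%.5f%% : %sus\n", p, high
      exit
    }
  }
' "$file"

Run against the median of the first histogram, this would print "50.00000% : 8721.329us", matching the summary line above.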
00:08:03.190 
00:08:03.190 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:03.190 ==============================================================================
00:08:03.190        Range in us     Cumulative    IO count
00:08:03.191 [ latency buckets flattened in capture: 7108.135 us through 31053.982 us, cumulative IO count reaching 100.0000% ( 2 ) ]
00:08:03.191 
00:08:03.191 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:03.191 ==============================================================================
00:08:03.191        Range in us     Cumulative    IO count
00:08:03.192 [ latency buckets flattened in capture: 7158.548 us through 30045.735 us, cumulative IO count reaching 100.0000% ( 2 ) ]
00:08:03.192 
00:08:03.192 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:03.192 ==============================================================================
00:08:03.192        Range in us     Cumulative    IO count
00:08:03.193 [ latency buckets flattened in capture: 7108.135 us through 28432.542 us, cumulative IO count reaching 100.0000% ( 1 ) ]
00:08:03.193 
00:08:03.193 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:03.193 ==============================================================================
00:08:03.193        Range in us     Cumulative    IO count
00:08:03.193 [ latency buckets flattened in capture: 7259.372 us onward; the excerpt ends mid-histogram at 11393.182 us ]
- 11443.594: 93.7643% ( 41) 00:08:03.193 11443.594 - 11494.006: 93.9722% ( 29) 00:08:03.193 11494.006 - 11544.418: 94.1585% ( 26) 00:08:03.193 11544.418 - 11594.831: 94.3377% ( 25) 00:08:03.193 11594.831 - 11645.243: 94.5097% ( 24) 00:08:03.193 11645.243 - 11695.655: 94.7033% ( 27) 00:08:03.193 11695.655 - 11746.068: 94.8251% ( 17) 00:08:03.193 11746.068 - 11796.480: 94.9326% ( 15) 00:08:03.193 11796.480 - 11846.892: 95.0330% ( 14) 00:08:03.193 11846.892 - 11897.305: 95.1190% ( 12) 00:08:03.193 11897.305 - 11947.717: 95.2122% ( 13) 00:08:03.193 11947.717 - 11998.129: 95.2838% ( 10) 00:08:03.193 11998.129 - 12048.542: 95.3913% ( 15) 00:08:03.193 12048.542 - 12098.954: 95.4917% ( 14) 00:08:03.193 12098.954 - 12149.366: 95.6207% ( 18) 00:08:03.193 12149.366 - 12199.778: 95.7139% ( 13) 00:08:03.193 12199.778 - 12250.191: 95.7999% ( 12) 00:08:03.193 12250.191 - 12300.603: 95.9002% ( 14) 00:08:03.193 12300.603 - 12351.015: 95.9647% ( 9) 00:08:03.193 12351.015 - 12401.428: 96.0436% ( 11) 00:08:03.193 12401.428 - 12451.840: 96.1224% ( 11) 00:08:03.193 12451.840 - 12502.252: 96.2371% ( 16) 00:08:03.193 12502.252 - 12552.665: 96.3159% ( 11) 00:08:03.193 12552.665 - 12603.077: 96.4091% ( 13) 00:08:03.193 12603.077 - 12653.489: 96.4808% ( 10) 00:08:03.193 12653.489 - 12703.902: 96.5883% ( 15) 00:08:03.193 12703.902 - 12754.314: 96.6313% ( 6) 00:08:03.193 12754.314 - 12804.726: 96.6815% ( 7) 00:08:03.193 12804.726 - 12855.138: 96.7101% ( 4) 00:08:03.193 12855.138 - 12905.551: 96.7317% ( 3) 00:08:03.193 12905.551 - 13006.375: 96.8607% ( 18) 00:08:03.193 13006.375 - 13107.200: 96.9753% ( 16) 00:08:03.193 13107.200 - 13208.025: 97.1330% ( 22) 00:08:03.193 13208.025 - 13308.849: 97.3050% ( 24) 00:08:03.193 13308.849 - 13409.674: 97.4341% ( 18) 00:08:03.193 13409.674 - 13510.498: 97.5416% ( 15) 00:08:03.193 13510.498 - 13611.323: 97.6491% ( 15) 00:08:03.193 13611.323 - 13712.148: 97.7566% ( 15) 00:08:03.193 13712.148 - 13812.972: 97.9644% ( 29) 00:08:03.193 13812.972 - 13913.797: 98.3013% ( 47) 00:08:03.193 13913.797 - 14014.622: 98.4590% ( 22) 00:08:03.193 14014.622 - 14115.446: 98.5450% ( 12) 00:08:03.193 14115.446 - 14216.271: 98.6669% ( 17) 00:08:03.193 14216.271 - 14317.095: 98.7959% ( 18) 00:08:03.193 14317.095 - 14417.920: 98.9249% ( 18) 00:08:03.193 14417.920 - 14518.745: 98.9536% ( 4) 00:08:03.193 14518.745 - 14619.569: 98.9679% ( 2) 00:08:03.193 14619.569 - 14720.394: 98.9894% ( 3) 00:08:03.194 14720.394 - 14821.218: 99.0109% ( 3) 00:08:03.194 14821.218 - 14922.043: 99.0396% ( 4) 00:08:03.194 14922.043 - 15022.868: 99.0539% ( 2) 00:08:03.194 15022.868 - 15123.692: 99.0754% ( 3) 00:08:03.194 15123.692 - 15224.517: 99.0826% ( 1) 00:08:03.194 19459.151 - 19559.975: 99.0897% ( 1) 00:08:03.194 19559.975 - 19660.800: 99.1184% ( 4) 00:08:03.194 19660.800 - 19761.625: 99.1471% ( 4) 00:08:03.194 19761.625 - 19862.449: 99.1757% ( 4) 00:08:03.194 19862.449 - 19963.274: 99.2044% ( 4) 00:08:03.194 19963.274 - 20064.098: 99.2331% ( 4) 00:08:03.194 20064.098 - 20164.923: 99.2618% ( 4) 00:08:03.194 20164.923 - 20265.748: 99.2904% ( 4) 00:08:03.194 20265.748 - 20366.572: 99.3263% ( 5) 00:08:03.194 20366.572 - 20467.397: 99.3549% ( 4) 00:08:03.194 20467.397 - 20568.222: 99.3836% ( 4) 00:08:03.194 20568.222 - 20669.046: 99.4123% ( 4) 00:08:03.194 20669.046 - 20769.871: 99.4481% ( 5) 00:08:03.194 20769.871 - 20870.695: 99.4768% ( 4) 00:08:03.194 20870.695 - 20971.520: 99.5054% ( 4) 00:08:03.194 20971.520 - 21072.345: 99.5413% ( 5) 00:08:03.194 25004.505 - 25105.329: 99.5628% ( 3) 00:08:03.194 25105.329 - 25206.154: 
99.5986% ( 5) 00:08:03.194 25206.154 - 25306.978: 99.6273% ( 4) 00:08:03.194 25306.978 - 25407.803: 99.6560% ( 4) 00:08:03.194 25407.803 - 25508.628: 99.6775% ( 3) 00:08:03.194 25508.628 - 25609.452: 99.7061% ( 4) 00:08:03.194 25609.452 - 25710.277: 99.7348% ( 4) 00:08:03.194 25710.277 - 25811.102: 99.7706% ( 5) 00:08:03.194 25811.102 - 26012.751: 99.8280% ( 8) 00:08:03.194 26012.751 - 26214.400: 99.8925% ( 9) 00:08:03.194 26214.400 - 26416.049: 99.9498% ( 8) 00:08:03.194 26416.049 - 26617.698: 100.0000% ( 7) 00:08:03.194 00:08:03.194 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:03.194 ============================================================================== 00:08:03.194 Range in us Cumulative IO count 00:08:03.194 7309.785 - 7360.197: 0.0071% ( 1) 00:08:03.194 7360.197 - 7410.609: 0.0285% ( 3) 00:08:03.194 7410.609 - 7461.022: 0.0999% ( 10) 00:08:03.194 7461.022 - 7511.434: 0.2354% ( 19) 00:08:03.194 7511.434 - 7561.846: 0.3995% ( 23) 00:08:03.194 7561.846 - 7612.258: 0.6564% ( 36) 00:08:03.194 7612.258 - 7662.671: 1.0488% ( 55) 00:08:03.194 7662.671 - 7713.083: 1.6053% ( 78) 00:08:03.194 7713.083 - 7763.495: 2.3901% ( 110) 00:08:03.194 7763.495 - 7813.908: 3.2534% ( 121) 00:08:03.194 7813.908 - 7864.320: 4.7945% ( 216) 00:08:03.194 7864.320 - 7914.732: 6.4355% ( 230) 00:08:03.194 7914.732 - 7965.145: 8.2691% ( 257) 00:08:03.194 7965.145 - 8015.557: 10.0385% ( 248) 00:08:03.194 8015.557 - 8065.969: 12.2860% ( 315) 00:08:03.194 8065.969 - 8116.382: 14.6689% ( 334) 00:08:03.194 8116.382 - 8166.794: 16.9521% ( 320) 00:08:03.194 8166.794 - 8217.206: 19.6490% ( 378) 00:08:03.194 8217.206 - 8267.618: 22.2745% ( 368) 00:08:03.194 8267.618 - 8318.031: 24.9857% ( 380) 00:08:03.194 8318.031 - 8368.443: 28.5388% ( 498) 00:08:03.194 8368.443 - 8418.855: 31.8850% ( 469) 00:08:03.194 8418.855 - 8469.268: 35.3596% ( 487) 00:08:03.194 8469.268 - 8519.680: 38.6701% ( 464) 00:08:03.194 8519.680 - 8570.092: 42.2731% ( 505) 00:08:03.194 8570.092 - 8620.505: 45.7477% ( 487) 00:08:03.194 8620.505 - 8670.917: 48.7942% ( 427) 00:08:03.194 8670.917 - 8721.329: 52.3116% ( 493) 00:08:03.194 8721.329 - 8771.742: 55.2012% ( 405) 00:08:03.194 8771.742 - 8822.154: 57.4772% ( 319) 00:08:03.194 8822.154 - 8872.566: 60.0742% ( 364) 00:08:03.194 8872.566 - 8922.978: 62.7212% ( 371) 00:08:03.194 8922.978 - 8973.391: 64.9115% ( 307) 00:08:03.194 8973.391 - 9023.803: 66.9735% ( 289) 00:08:03.194 9023.803 - 9074.215: 68.8071% ( 257) 00:08:03.194 9074.215 - 9124.628: 70.8547% ( 287) 00:08:03.194 9124.628 - 9175.040: 72.2032% ( 189) 00:08:03.194 9175.040 - 9225.452: 73.7372% ( 215) 00:08:03.194 9225.452 - 9275.865: 75.2069% ( 206) 00:08:03.194 9275.865 - 9326.277: 76.4127% ( 169) 00:08:03.194 9326.277 - 9376.689: 77.6113% ( 168) 00:08:03.194 9376.689 - 9427.102: 78.7457% ( 159) 00:08:03.194 9427.102 - 9477.514: 79.7588% ( 142) 00:08:03.194 9477.514 - 9527.926: 80.7720% ( 142) 00:08:03.194 9527.926 - 9578.338: 81.4284% ( 92) 00:08:03.194 9578.338 - 9628.751: 82.0277% ( 84) 00:08:03.194 9628.751 - 9679.163: 82.4772% ( 63) 00:08:03.194 9679.163 - 9729.575: 82.9195% ( 62) 00:08:03.194 9729.575 - 9779.988: 83.4475% ( 74) 00:08:03.194 9779.988 - 9830.400: 84.0325% ( 82) 00:08:03.194 9830.400 - 9880.812: 84.5106% ( 67) 00:08:03.194 9880.812 - 9931.225: 84.9814% ( 66) 00:08:03.194 9931.225 - 9981.637: 85.2526% ( 38) 00:08:03.194 9981.637 - 10032.049: 85.6807% ( 60) 00:08:03.194 10032.049 - 10082.462: 86.1872% ( 71) 00:08:03.194 10082.462 - 10132.874: 86.6581% ( 66) 00:08:03.194 10132.874 - 10183.286: 86.9863% 
( 46) 00:08:03.194 10183.286 - 10233.698: 87.2860% ( 42) 00:08:03.194 10233.698 - 10284.111: 87.5856% ( 42) 00:08:03.194 10284.111 - 10334.523: 87.9424% ( 50) 00:08:03.194 10334.523 - 10384.935: 88.2277% ( 40) 00:08:03.194 10384.935 - 10435.348: 88.5417% ( 44) 00:08:03.194 10435.348 - 10485.760: 88.8699% ( 46) 00:08:03.194 10485.760 - 10536.172: 89.1838% ( 44) 00:08:03.194 10536.172 - 10586.585: 89.4620% ( 39) 00:08:03.194 10586.585 - 10636.997: 89.7260% ( 37) 00:08:03.194 10636.997 - 10687.409: 90.0471% ( 45) 00:08:03.194 10687.409 - 10737.822: 90.2897% ( 34) 00:08:03.194 10737.822 - 10788.234: 90.5893% ( 42) 00:08:03.194 10788.234 - 10838.646: 90.8604% ( 38) 00:08:03.194 10838.646 - 10889.058: 91.1102% ( 35) 00:08:03.194 10889.058 - 10939.471: 91.4098% ( 42) 00:08:03.194 10939.471 - 10989.883: 91.8165% ( 57) 00:08:03.194 10989.883 - 11040.295: 92.0377% ( 31) 00:08:03.194 11040.295 - 11090.708: 92.2945% ( 36) 00:08:03.194 11090.708 - 11141.120: 92.5585% ( 37) 00:08:03.194 11141.120 - 11191.532: 92.9295% ( 52) 00:08:03.194 11191.532 - 11241.945: 93.1792% ( 35) 00:08:03.194 11241.945 - 11292.357: 93.3933% ( 30) 00:08:03.194 11292.357 - 11342.769: 93.5431% ( 21) 00:08:03.194 11342.769 - 11393.182: 93.7072% ( 23) 00:08:03.194 11393.182 - 11443.594: 93.8856% ( 25) 00:08:03.194 11443.594 - 11494.006: 94.0354% ( 21) 00:08:03.194 11494.006 - 11544.418: 94.1924% ( 22) 00:08:03.194 11544.418 - 11594.831: 94.3065% ( 16) 00:08:03.194 11594.831 - 11645.243: 94.4207% ( 16) 00:08:03.194 11645.243 - 11695.655: 94.5562% ( 19) 00:08:03.194 11695.655 - 11746.068: 94.6846% ( 18) 00:08:03.194 11746.068 - 11796.480: 94.8131% ( 18) 00:08:03.194 11796.480 - 11846.892: 94.9272% ( 16) 00:08:03.194 11846.892 - 11897.305: 95.0128% ( 12) 00:08:03.194 11897.305 - 11947.717: 95.1056% ( 13) 00:08:03.194 11947.717 - 11998.129: 95.1769% ( 10) 00:08:03.194 11998.129 - 12048.542: 95.3696% ( 27) 00:08:03.194 12048.542 - 12098.954: 95.4552% ( 12) 00:08:03.194 12098.954 - 12149.366: 95.4980% ( 6) 00:08:03.194 12149.366 - 12199.778: 95.5551% ( 8) 00:08:03.194 12199.778 - 12250.191: 95.5908% ( 5) 00:08:03.194 12250.191 - 12300.603: 95.6764% ( 12) 00:08:03.194 12300.603 - 12351.015: 95.7477% ( 10) 00:08:03.194 12351.015 - 12401.428: 95.8048% ( 8) 00:08:03.194 12401.428 - 12451.840: 95.8975% ( 13) 00:08:03.194 12451.840 - 12502.252: 96.0260% ( 18) 00:08:03.194 12502.252 - 12552.665: 96.1045% ( 11) 00:08:03.194 12552.665 - 12603.077: 96.1758% ( 10) 00:08:03.194 12603.077 - 12653.489: 96.2614% ( 12) 00:08:03.194 12653.489 - 12703.902: 96.3328% ( 10) 00:08:03.194 12703.902 - 12754.314: 96.4041% ( 10) 00:08:03.194 12754.314 - 12804.726: 96.5040% ( 14) 00:08:03.194 12804.726 - 12855.138: 96.5896% ( 12) 00:08:03.194 12855.138 - 12905.551: 96.6752% ( 12) 00:08:03.194 12905.551 - 13006.375: 96.8108% ( 19) 00:08:03.194 13006.375 - 13107.200: 96.9820% ( 24) 00:08:03.194 13107.200 - 13208.025: 97.1176% ( 19) 00:08:03.194 13208.025 - 13308.849: 97.2103% ( 13) 00:08:03.194 13308.849 - 13409.674: 97.3102% ( 14) 00:08:03.194 13409.674 - 13510.498: 97.4172% ( 15) 00:08:03.194 13510.498 - 13611.323: 97.5813% ( 23) 00:08:03.194 13611.323 - 13712.148: 97.9167% ( 47) 00:08:03.194 13712.148 - 13812.972: 98.1022% ( 26) 00:08:03.194 13812.972 - 13913.797: 98.3662% ( 37) 00:08:03.194 13913.797 - 14014.622: 98.5659% ( 28) 00:08:03.194 14014.622 - 14115.446: 98.7158% ( 21) 00:08:03.194 14115.446 - 14216.271: 98.7942% ( 11) 00:08:03.194 14216.271 - 14317.095: 98.8941% ( 14) 00:08:03.194 14317.095 - 14417.920: 99.0083% ( 16) 00:08:03.194 14417.920 - 
14518.745: 99.1367% ( 18) 00:08:03.194 14518.745 - 14619.569: 99.2437% ( 15) 00:08:03.194 14619.569 - 14720.394: 99.3365% ( 13) 00:08:03.194 14720.394 - 14821.218: 99.4007% ( 9) 00:08:03.194 14821.218 - 14922.043: 99.4364% ( 5) 00:08:03.194 14922.043 - 15022.868: 99.4649% ( 4) 00:08:03.194 15022.868 - 15123.692: 99.5006% ( 5) 00:08:03.194 15123.692 - 15224.517: 99.5291% ( 4) 00:08:03.194 15224.517 - 15325.342: 99.5434% ( 2) 00:08:03.194 18955.028 - 19055.852: 99.5576% ( 2) 00:08:03.194 19055.852 - 19156.677: 99.5862% ( 4) 00:08:03.194 19156.677 - 19257.502: 99.6147% ( 4) 00:08:03.194 19257.502 - 19358.326: 99.6504% ( 5) 00:08:03.194 19358.326 - 19459.151: 99.6789% ( 4) 00:08:03.194 19459.151 - 19559.975: 99.7075% ( 4) 00:08:03.194 19559.975 - 19660.800: 99.7360% ( 4) 00:08:03.194 19660.800 - 19761.625: 99.7646% ( 4) 00:08:03.194 19761.625 - 19862.449: 99.7931% ( 4) 00:08:03.194 19862.449 - 19963.274: 99.8216% ( 4) 00:08:03.195 19963.274 - 20064.098: 99.8502% ( 4) 00:08:03.195 20064.098 - 20164.923: 99.8787% ( 4) 00:08:03.195 20164.923 - 20265.748: 99.9144% ( 5) 00:08:03.195 20265.748 - 20366.572: 99.9429% ( 4) 00:08:03.195 20366.572 - 20467.397: 99.9715% ( 4) 00:08:03.195 20467.397 - 20568.222: 100.0000% ( 4) 00:08:03.195 00:08:03.195 21:39:21 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:03.195 00:08:03.195 real 0m2.507s 00:08:03.195 user 0m2.197s 00:08:03.195 sys 0m0.202s 00:08:03.195 21:39:21 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.195 21:39:21 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:03.195 ************************************ 00:08:03.195 END TEST nvme_perf 00:08:03.195 ************************************ 00:08:03.195 21:39:21 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:03.195 21:39:21 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:03.195 21:39:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.195 21:39:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.195 ************************************ 00:08:03.195 START TEST nvme_hello_world 00:08:03.195 ************************************ 00:08:03.195 21:39:21 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:03.195 Initializing NVMe Controllers 00:08:03.195 Attached to 0000:00:10.0 00:08:03.195 Namespace ID: 1 size: 6GB 00:08:03.195 Attached to 0000:00:11.0 00:08:03.195 Namespace ID: 1 size: 5GB 00:08:03.195 Attached to 0000:00:13.0 00:08:03.195 Namespace ID: 1 size: 1GB 00:08:03.195 Attached to 0000:00:12.0 00:08:03.195 Namespace ID: 1 size: 4GB 00:08:03.195 Namespace ID: 2 size: 4GB 00:08:03.195 Namespace ID: 3 size: 4GB 00:08:03.195 Initialization complete. 00:08:03.195 INFO: using host memory buffer for IO 00:08:03.195 Hello world! 00:08:03.195 INFO: using host memory buffer for IO 00:08:03.195 Hello world! 00:08:03.195 INFO: using host memory buffer for IO 00:08:03.195 Hello world! 00:08:03.195 INFO: using host memory buffer for IO 00:08:03.195 Hello world! 00:08:03.195 INFO: using host memory buffer for IO 00:08:03.195 Hello world! 00:08:03.195 INFO: using host memory buffer for IO 00:08:03.195 Hello world! 
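The banner pairs and the real/user/sys timings that bracket nvme_perf and every test below come from the run_test helper in autotest_common.sh, whose xtrace lines appear throughout this log. A minimal sketch of that wrapper pattern, assuming only what the log itself shows (the real helper also manages xtrace state and performs argument checks such as the '[' 4 -le 1 ']' guard above):

    # Sketch of the run_test pattern seen in this log: banner, timed run, banner.
    run_test() {
        local name=$1
        shift
        printf '************************************\n'
        printf 'START TEST %s\n' "$name"
        printf '************************************\n'
        time "$@"            # e.g. build/examples/hello_world -i 0
        local rc=$?
        printf '************************************\n'
        printf 'END TEST %s\n' "$name"
        printf '************************************\n'
        return "$rc"
    }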
00:08:03.195 00:08:03.195 real 0m0.203s 00:08:03.195 user 0m0.075s 00:08:03.195 sys 0m0.085s 00:08:03.195 21:39:22 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.195 21:39:22 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:03.195 ************************************ 00:08:03.195 END TEST nvme_hello_world 00:08:03.195 ************************************ 00:08:03.195 21:39:22 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:03.195 21:39:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.195 21:39:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.195 21:39:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.195 ************************************ 00:08:03.195 START TEST nvme_sgl 00:08:03.195 ************************************ 00:08:03.195 21:39:22 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:03.456 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:03.456 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:03.456 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:03.456 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:03.456 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:03.456 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:03.456 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:03.456 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:03.456 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:03.456 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:03.456 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:03.456 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:03.456 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:03.456 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:03.456 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:03.456 NVMe Readv/Writev Request test 00:08:03.456 Attached to 0000:00:10.0 00:08:03.456 Attached to 0000:00:11.0 00:08:03.456 Attached to 0000:00:13.0 00:08:03.456 Attached to 0000:00:12.0 00:08:03.456 0000:00:10.0: build_io_request_2 test passed 00:08:03.456 0000:00:10.0: build_io_request_4 test passed 00:08:03.456 0000:00:10.0: build_io_request_5 test passed 00:08:03.456 0000:00:10.0: build_io_request_6 test passed 00:08:03.456 0000:00:10.0: build_io_request_7 test passed 00:08:03.456 0000:00:10.0: build_io_request_10 test passed 00:08:03.456 0000:00:11.0: build_io_request_2 test passed 00:08:03.456 0000:00:11.0: build_io_request_4 test passed 00:08:03.456 0000:00:11.0: build_io_request_5 test passed 00:08:03.456 0000:00:11.0: build_io_request_6 test passed 00:08:03.456 0000:00:11.0: build_io_request_7 test passed 00:08:03.456 0000:00:11.0: build_io_request_10 test passed 00:08:03.456 Cleaning up... 00:08:03.456 00:08:03.456 real 0m0.275s 00:08:03.456 user 0m0.142s 00:08:03.456 sys 0m0.088s 00:08:03.456 21:39:22 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.456 ************************************ 00:08:03.456 END TEST nvme_sgl 00:08:03.456 ************************************ 00:08:03.456 21:39:22 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:03.456 21:39:22 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:03.456 21:39:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.456 21:39:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.456 21:39:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.456 ************************************ 00:08:03.456 START TEST nvme_e2edp 00:08:03.456 ************************************ 00:08:03.456 21:39:22 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:03.717 NVMe Write/Read with End-to-End data protection test 00:08:03.717 Attached to 0000:00:10.0 00:08:03.717 Attached to 0000:00:11.0 00:08:03.717 Attached to 0000:00:13.0 00:08:03.717 Attached to 0000:00:12.0 00:08:03.717 Cleaning up... 
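nvme_dp attaches to all four controllers and immediately cleans up without reporting any I/O, which suggests none of these QEMU-emulated namespaces are formatted with end-to-end protection information, leaving the data-protection cases nothing to exercise. One way to check that on a live system, outside this harness (uses nvme-cli, with an illustrative device name; a dps value of 0 means the namespace carries no protection information):

    # Inspect the Data Protection Settings (dps) field of a namespace.
    nvme id-ns /dev/nvme0n1 --human-readable | grep -i dps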
00:08:03.717 00:08:03.717 real 0m0.194s 00:08:03.717 user 0m0.060s 00:08:03.717 sys 0m0.090s 00:08:03.717 21:39:22 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.717 21:39:22 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:03.717 ************************************ 00:08:03.717 END TEST nvme_e2edp 00:08:03.717 ************************************ 00:08:03.717 21:39:22 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:03.717 21:39:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.717 21:39:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.717 21:39:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.717 ************************************ 00:08:03.717 START TEST nvme_reserve 00:08:03.717 ************************************ 00:08:03.717 21:39:22 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:03.977 ===================================================== 00:08:03.977 NVMe Controller at PCI bus 0, device 16, function 0 00:08:03.977 ===================================================== 00:08:03.977 Reservations: Not Supported 00:08:03.977 ===================================================== 00:08:03.977 NVMe Controller at PCI bus 0, device 17, function 0 00:08:03.977 ===================================================== 00:08:03.977 Reservations: Not Supported 00:08:03.977 ===================================================== 00:08:03.977 NVMe Controller at PCI bus 0, device 19, function 0 00:08:03.977 ===================================================== 00:08:03.977 Reservations: Not Supported 00:08:03.977 ===================================================== 00:08:03.977 NVMe Controller at PCI bus 0, device 18, function 0 00:08:03.977 ===================================================== 00:08:03.977 Reservations: Not Supported 00:08:03.977 Reservation test passed 00:08:03.977 00:08:03.977 real 0m0.193s 00:08:03.977 user 0m0.063s 00:08:03.977 sys 0m0.086s 00:08:03.977 21:39:22 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.977 21:39:22 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:03.977 ************************************ 00:08:03.977 END TEST nvme_reserve 00:08:03.977 ************************************ 00:08:03.977 21:39:22 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:03.977 21:39:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.977 21:39:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.977 21:39:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.977 ************************************ 00:08:03.977 START TEST nvme_err_injection 00:08:03.977 ************************************ 00:08:03.977 21:39:22 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:04.238 NVMe Error Injection test 00:08:04.238 Attached to 0000:00:10.0 00:08:04.238 Attached to 0000:00:11.0 00:08:04.238 Attached to 0000:00:13.0 00:08:04.238 Attached to 0000:00:12.0 00:08:04.238 0000:00:10.0: get features failed as expected 00:08:04.238 0000:00:11.0: get features failed as expected 00:08:04.238 0000:00:13.0: get features failed as expected 00:08:04.238 0000:00:12.0: get features failed as expected 00:08:04.238 
0000:00:10.0: get features successfully as expected 00:08:04.238 0000:00:11.0: get features successfully as expected 00:08:04.238 0000:00:13.0: get features successfully as expected 00:08:04.238 0000:00:12.0: get features successfully as expected 00:08:04.238 0000:00:10.0: read failed as expected 00:08:04.238 0000:00:11.0: read failed as expected 00:08:04.238 0000:00:13.0: read failed as expected 00:08:04.238 0000:00:12.0: read failed as expected 00:08:04.238 0000:00:10.0: read successfully as expected 00:08:04.238 0000:00:11.0: read successfully as expected 00:08:04.238 0000:00:13.0: read successfully as expected 00:08:04.238 0000:00:12.0: read successfully as expected 00:08:04.238 Cleaning up... 00:08:04.238 00:08:04.238 real 0m0.212s 00:08:04.238 user 0m0.076s 00:08:04.238 sys 0m0.090s 00:08:04.238 21:39:23 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.238 21:39:23 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:04.238 ************************************ 00:08:04.238 END TEST nvme_err_injection 00:08:04.238 ************************************ 00:08:04.238 21:39:23 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:04.238 21:39:23 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:04.238 21:39:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.238 21:39:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.238 ************************************ 00:08:04.238 START TEST nvme_overhead 00:08:04.238 ************************************ 00:08:04.238 21:39:23 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:05.621 Initializing NVMe Controllers 00:08:05.621 Attached to 0000:00:10.0 00:08:05.621 Attached to 0000:00:11.0 00:08:05.621 Attached to 0000:00:13.0 00:08:05.621 Attached to 0000:00:12.0 00:08:05.621 Initialization complete. Launching workers. 
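The overhead tool starting here measures host-side software cost per IO on the submit and completion paths; from the invocation above, -o 4096 and -t 1 appear to select a 4 KiB IO size and a one-second run, -H the histograms printed below, and -i 0 the shared shm instance id used by every test in this job. The two summary lines that follow can be collapsed into a single per-IO figure from a captured copy of this output (overhead.log is a hypothetical capture):

    # Sum the average submit and complete overheads (ns per IO).
    awk '/\(in ns\)/ { gsub(/,/, ""); sum += $(NF - 2) }
         END { printf "avg software overhead: %.1f ns per IO\n", sum }' overhead.log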
00:08:05.621 submit (in ns)   avg, min, max =  11605.0,  10197.7, 129673.8
00:08:05.621 complete (in ns) avg, min, max =   7848.1,   7216.2, 319124.6
00:08:05.621 
00:08:05.621 Submit histogram
00:08:05.621 ================
00:08:05.621        Range in us     Cumulative     Count
00:08:05.621 [submit histogram buckets 10.191us - 129.969us omitted: cumulative count rises from 0.0215% to 100.0000% at 129.969us]
00:08:05.622 
00:08:05.622 Complete histogram
00:08:05.622 ==================
00:08:05.622        Range in us     Cumulative     Count
00:08:05.622 [complete histogram buckets 7.188us - 319.803us omitted: cumulative count rises from 0.0862% to 100.0000% at 319.803us]
00:08:05.622 
00:08:05.622 real	0m1.217s
00:08:05.622 user	0m1.061s
00:08:05.622 sys	0m0.106s
00:08:05.622 21:39:24 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:05.622 21:39:24 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:08:05.622 ************************************
00:08:05.622 END TEST nvme_overhead
00:08:05.622 ************************************
00:08:05.622 21:39:24 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:05.622 21:39:24 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:08:05.622 21:39:24 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:05.622 21:39:24 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:05.622 ************************************
00:08:05.622 START TEST nvme_arbitration
00:08:05.622 ************************************
00:08:05.622 21:39:24 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:08.920 Initializing NVMe Controllers
00:08:08.920 Attached to 0000:00:10.0
00:08:08.920 Attached to 0000:00:11.0
00:08:08.920 Attached to 0000:00:13.0
00:08:08.920 Attached to 0000:00:12.0
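The arbitration example starting above spreads queue pairs across the cores of its -c 0xf mask and services them at different arbitration priorities, which is what the per-core "urgent priority queue" threads and IO/s figures below report. Decoding such a core mask is plain bit inspection (a standalone snippet, not part of the harness):

    # Decode an SPDK-style -c core mask into core indices (0xf -> cores 0-3).
    mask=0xf
    for ((i = 0; i < 32; i++)); do
        (( (mask >> i) & 1 )) && echo "core $i enabled"
    done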
00:08:08.920 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:08.920 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:08.920 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:08.920 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:08.920 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:08.920 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:08.920 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:08.920 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:08.920 Initialization complete. Launching workers. 00:08:08.920 Starting thread on core 1 with urgent priority queue 00:08:08.920 Starting thread on core 2 with urgent priority queue 00:08:08.920 Starting thread on core 3 with urgent priority queue 00:08:08.920 Starting thread on core 0 with urgent priority queue 00:08:08.920 QEMU NVMe Ctrl (12340 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:08:08.920 QEMU NVMe Ctrl (12342 ) core 0: 917.33 IO/s 109.01 secs/100000 ios 00:08:08.920 QEMU NVMe Ctrl (12341 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:08:08.920 QEMU NVMe Ctrl (12342 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:08:08.920 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios 00:08:08.920 QEMU NVMe Ctrl (12342 ) core 3: 981.33 IO/s 101.90 secs/100000 ios 00:08:08.920 ======================================================== 00:08:08.920 00:08:08.920 00:08:08.920 real 0m3.312s 00:08:08.920 user 0m9.264s 00:08:08.920 sys 0m0.113s 00:08:08.920 21:39:27 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.920 ************************************ 00:08:08.920 END TEST nvme_arbitration 00:08:08.920 ************************************ 00:08:08.920 21:39:27 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:08.920 21:39:27 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:08.920 21:39:27 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:08.920 21:39:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.920 21:39:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.920 ************************************ 00:08:08.920 START TEST nvme_single_aen 00:08:08.920 ************************************ 00:08:08.920 21:39:27 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:09.181 Asynchronous Event Request test 00:08:09.181 Attached to 0000:00:10.0 00:08:09.181 Attached to 0000:00:11.0 00:08:09.181 Attached to 0000:00:13.0 00:08:09.181 Attached to 0000:00:12.0 00:08:09.181 Reset controller to setup AER completions for this process 00:08:09.181 Registering asynchronous event callbacks... 
00:08:09.181 Getting orig temperature thresholds of all controllers 00:08:09.181 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.181 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.181 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.181 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:09.181 Setting all controllers temperature threshold low to trigger AER 00:08:09.181 Waiting for all controllers temperature threshold to be set lower 00:08:09.181 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.181 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:09.181 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.181 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:09.181 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.181 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:09.181 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:09.181 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:09.181 Waiting for all controllers to trigger AER and reset threshold 00:08:09.181 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.181 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.181 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.181 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.181 Cleaning up... 00:08:09.181 00:08:09.181 real 0m0.217s 00:08:09.181 user 0m0.061s 00:08:09.181 sys 0m0.110s 00:08:09.181 21:39:27 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.181 ************************************ 00:08:09.181 END TEST nvme_single_aen 00:08:09.181 21:39:27 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:09.181 ************************************ 00:08:09.181 21:39:28 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:09.181 21:39:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.181 21:39:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.181 21:39:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.181 ************************************ 00:08:09.181 START TEST nvme_doorbell_aers 00:08:09.181 ************************************ 00:08:09.181 21:39:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:09.181 21:39:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:09.181 21:39:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:09.181 21:39:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:09.181 21:39:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:09.181 21:39:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:09.181 21:39:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:09.181 21:39:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:09.181 21:39:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:09.182 21:39:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
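The xtrace lines above show how the doorbell test discovers its targets: gen_nvme.sh emits the generated controller config as JSON and jq extracts each controller's PCIe address from it. Reassembled from those traces, the helper is roughly the following (simplified; the '(( 4 == 0 ))' traced on the next line is this empty-list guard evaluated with four addresses found):

    # get_nvme_bdfs, reconstructed from the xtrace above.
    get_nvme_bdfs() {
        local bdfs=()
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1   # no NVMe controllers found
        printf '%s\n' "${bdfs[@]}"
    }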
00:08:09.182 21:39:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:09.182 21:39:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:09.182 21:39:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:09.182 21:39:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:09.443 [2024-09-29 21:39:28.239152] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:19.438 Executing: test_write_invalid_db 00:08:19.438 Waiting for AER completion... 00:08:19.438 Failure: test_write_invalid_db 00:08:19.438 00:08:19.438 Executing: test_invalid_db_write_overflow_sq 00:08:19.438 Waiting for AER completion... 00:08:19.438 Failure: test_invalid_db_write_overflow_sq 00:08:19.438 00:08:19.438 Executing: test_invalid_db_write_overflow_cq 00:08:19.438 Waiting for AER completion... 00:08:19.438 Failure: test_invalid_db_write_overflow_cq 00:08:19.438 00:08:19.438 21:39:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:19.438 21:39:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:19.438 [2024-09-29 21:39:38.290955] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:29.413 Executing: test_write_invalid_db 00:08:29.413 Waiting for AER completion... 00:08:29.413 Failure: test_write_invalid_db 00:08:29.413 00:08:29.413 Executing: test_invalid_db_write_overflow_sq 00:08:29.413 Waiting for AER completion... 00:08:29.413 Failure: test_invalid_db_write_overflow_sq 00:08:29.413 00:08:29.413 Executing: test_invalid_db_write_overflow_cq 00:08:29.413 Waiting for AER completion... 00:08:29.413 Failure: test_invalid_db_write_overflow_cq 00:08:29.413 00:08:29.413 21:39:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:29.413 21:39:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:29.413 [2024-09-29 21:39:48.318903] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:39.390 Executing: test_write_invalid_db 00:08:39.390 Waiting for AER completion... 00:08:39.390 Failure: test_write_invalid_db 00:08:39.390 00:08:39.390 Executing: test_invalid_db_write_overflow_sq 00:08:39.390 Waiting for AER completion... 00:08:39.390 Failure: test_invalid_db_write_overflow_sq 00:08:39.390 00:08:39.390 Executing: test_invalid_db_write_overflow_cq 00:08:39.390 Waiting for AER completion... 
00:08:39.390 Failure: test_invalid_db_write_overflow_cq 00:08:39.390 00:08:39.390 21:39:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:39.390 21:39:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:39.390 [2024-09-29 21:39:58.355128] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.355 Executing: test_write_invalid_db 00:08:49.355 Waiting for AER completion... 00:08:49.355 Failure: test_write_invalid_db 00:08:49.355 00:08:49.355 Executing: test_invalid_db_write_overflow_sq 00:08:49.355 Waiting for AER completion... 00:08:49.355 Failure: test_invalid_db_write_overflow_sq 00:08:49.355 00:08:49.355 Executing: test_invalid_db_write_overflow_cq 00:08:49.355 Waiting for AER completion... 00:08:49.355 Failure: test_invalid_db_write_overflow_cq 00:08:49.355 00:08:49.355 00:08:49.355 real 0m40.182s 00:08:49.355 user 0m34.131s 00:08:49.355 sys 0m5.678s 00:08:49.355 21:40:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.355 21:40:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:49.355 ************************************ 00:08:49.355 END TEST nvme_doorbell_aers 00:08:49.355 ************************************ 00:08:49.355 21:40:08 nvme -- nvme/nvme.sh@97 -- # uname 00:08:49.355 21:40:08 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:49.355 21:40:08 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:49.355 21:40:08 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:49.355 21:40:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.355 21:40:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.355 ************************************ 00:08:49.355 START TEST nvme_multi_aen 00:08:49.356 ************************************ 00:08:49.356 21:40:08 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:49.614 [2024-09-29 21:40:08.407431] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.407490] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.407499] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.408872] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.408912] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.408921] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.409921] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. 
Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.409949] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.409956] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.410910] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.410936] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 [2024-09-29 21:40:08.410943] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63617) is not found. Dropping the request. 00:08:49.614 Child process pid: 64143 00:08:49.872 [Child] Asynchronous Event Request test 00:08:49.872 [Child] Attached to 0000:00:10.0 00:08:49.872 [Child] Attached to 0000:00:11.0 00:08:49.872 [Child] Attached to 0000:00:13.0 00:08:49.872 [Child] Attached to 0000:00:12.0 00:08:49.872 [Child] Registering asynchronous event callbacks... 00:08:49.872 [Child] Getting orig temperature thresholds of all controllers 00:08:49.872 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.872 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.872 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.872 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.872 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:49.872 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.872 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.872 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.872 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.872 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.872 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.872 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.872 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.872 [Child] Cleaning up... 00:08:49.872 Asynchronous Event Request test 00:08:49.872 Attached to 0000:00:10.0 00:08:49.872 Attached to 0000:00:11.0 00:08:49.872 Attached to 0000:00:13.0 00:08:49.872 Attached to 0000:00:12.0 00:08:49.872 Reset controller to setup AER completions for this process 00:08:49.872 Registering asynchronous event callbacks... 
00:08:49.872 Getting orig temperature thresholds of all controllers 00:08:49.872 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.872 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.872 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.872 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:49.872 Setting all controllers temperature threshold low to trigger AER 00:08:49.872 Waiting for all controllers temperature threshold to be set lower 00:08:49.872 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.872 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:49.872 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.872 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:49.872 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.872 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:49.872 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:49.872 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:49.872 Waiting for all controllers to trigger AER and reset threshold 00:08:49.872 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.872 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.872 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.872 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:49.872 Cleaning up... 00:08:49.872 00:08:49.872 real 0m0.432s 00:08:49.872 user 0m0.133s 00:08:49.872 sys 0m0.188s 00:08:49.872 21:40:08 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.872 21:40:08 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 ************************************ 00:08:49.872 END TEST nvme_multi_aen 00:08:49.872 ************************************ 00:08:49.872 21:40:08 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:49.872 21:40:08 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:49.872 21:40:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.872 21:40:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 ************************************ 00:08:49.872 START TEST nvme_startup 00:08:49.872 ************************************ 00:08:49.872 21:40:08 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:50.131 Initializing NVMe Controllers 00:08:50.131 Attached to 0000:00:10.0 00:08:50.131 Attached to 0000:00:11.0 00:08:50.131 Attached to 0000:00:13.0 00:08:50.131 Attached to 0000:00:12.0 00:08:50.131 Initialization complete. 00:08:50.131 Time used:139528.594 (us). 
00:08:50.131 00:08:50.131 real 0m0.202s 00:08:50.131 user 0m0.063s 00:08:50.131 sys 0m0.095s 00:08:50.131 21:40:08 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.131 21:40:08 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:50.131 ************************************ 00:08:50.131 END TEST nvme_startup 00:08:50.131 ************************************ 00:08:50.131 21:40:08 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:50.131 21:40:08 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:50.131 21:40:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.131 21:40:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.131 ************************************ 00:08:50.131 START TEST nvme_multi_secondary 00:08:50.131 ************************************ 00:08:50.131 21:40:08 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:50.131 21:40:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64188 00:08:50.131 21:40:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:50.131 21:40:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64189 00:08:50.131 21:40:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:50.131 21:40:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:53.418 Initializing NVMe Controllers 00:08:53.418 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.418 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.418 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.418 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.418 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:53.419 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:53.419 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:53.419 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:53.419 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:53.419 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:53.419 Initialization complete. Launching workers. 
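The three spdk_nvme_perf invocations above, one primary on mask 0x1 for 5 seconds and two secondaries on masks 0x2 and 0x4 for 3 seconds each, are the heart of the multi-secondary test: the shared memory id (-i 0) is what lets the later processes attach to the first one's controllers. Reduced to a single primary/secondary pair, as a sketch with the harness's error handling omitted:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  # -i 0: same shm id, so the second process attaches as a secondary;
  # -c: disjoint core masks keep primary and secondary on separate cores.
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # primary, core 0
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # secondary, core 1
  wait "$pid1"
  wait "$pid0"

In the latency tables that follow, bandwidth is just IOPS scaled by the 4096-byte block size: MiB/s = IOPS * 4096 / 1048576, so the 7934.13 IOPS rows work out to the 30.99 MiB/s shown.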
00:08:53.419 ======================================================== 00:08:53.419 Latency(us) 00:08:53.419 Device Information : IOPS MiB/s Average min max 00:08:53.419 PCIE (0000:00:10.0) NSID 1 from core 1: 7934.13 30.99 2015.28 739.21 5452.43 00:08:53.419 PCIE (0000:00:11.0) NSID 1 from core 1: 7934.13 30.99 2016.21 747.10 5803.75 00:08:53.419 PCIE (0000:00:13.0) NSID 1 from core 1: 7934.13 30.99 2016.19 733.45 5731.77 00:08:53.419 PCIE (0000:00:12.0) NSID 1 from core 1: 7934.13 30.99 2016.17 720.44 5482.80 00:08:53.419 PCIE (0000:00:12.0) NSID 2 from core 1: 7934.13 30.99 2016.24 729.75 5318.83 00:08:53.419 PCIE (0000:00:12.0) NSID 3 from core 1: 7934.13 30.99 2016.21 744.05 4993.94 00:08:53.419 ======================================================== 00:08:53.419 Total : 47604.78 185.96 2016.05 720.44 5803.75 00:08:53.419 00:08:53.419 Initializing NVMe Controllers 00:08:53.419 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.419 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.419 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.419 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.419 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:53.419 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:53.419 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:53.419 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:53.419 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:53.419 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:53.419 Initialization complete. Launching workers. 00:08:53.419 ======================================================== 00:08:53.419 Latency(us) 00:08:53.419 Device Information : IOPS MiB/s Average min max 00:08:53.419 PCIE (0000:00:10.0) NSID 1 from core 2: 3235.77 12.64 4946.40 1059.87 12147.95 00:08:53.419 PCIE (0000:00:11.0) NSID 1 from core 2: 3235.77 12.64 4950.99 1110.03 12530.85 00:08:53.419 PCIE (0000:00:13.0) NSID 1 from core 2: 3235.77 12.64 4950.61 1125.30 12827.11 00:08:53.419 PCIE (0000:00:12.0) NSID 1 from core 2: 3235.77 12.64 4951.11 1222.59 13921.51 00:08:53.419 PCIE (0000:00:12.0) NSID 2 from core 2: 3235.77 12.64 4951.07 1222.61 15746.65 00:08:53.419 PCIE (0000:00:12.0) NSID 3 from core 2: 3235.77 12.64 4951.05 1065.24 15546.27 00:08:53.419 ======================================================== 00:08:53.419 Total : 19414.61 75.84 4950.21 1059.87 15746.65 00:08:53.419 00:08:53.679 21:40:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64188 00:08:55.581 Initializing NVMe Controllers 00:08:55.581 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.581 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.581 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.581 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.581 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:55.581 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:55.581 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:55.581 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:55.581 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:55.581 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:55.581 Initialization complete. Launching workers. 
00:08:55.581 ======================================================== 00:08:55.581 Latency(us) 00:08:55.581 Device Information : IOPS MiB/s Average min max 00:08:55.581 PCIE (0000:00:10.0) NSID 1 from core 0: 10968.06 42.84 1457.57 691.47 6428.07 00:08:55.581 PCIE (0000:00:11.0) NSID 1 from core 0: 10968.06 42.84 1458.39 723.31 6258.31 00:08:55.581 PCIE (0000:00:13.0) NSID 1 from core 0: 10968.06 42.84 1458.37 666.96 5979.16 00:08:55.581 PCIE (0000:00:12.0) NSID 1 from core 0: 10968.06 42.84 1458.35 649.54 6970.38 00:08:55.581 PCIE (0000:00:12.0) NSID 2 from core 0: 10968.06 42.84 1458.32 627.33 6372.40 00:08:55.581 PCIE (0000:00:12.0) NSID 3 from core 0: 10968.06 42.84 1458.30 607.05 6574.40 00:08:55.581 ======================================================== 00:08:55.581 Total : 65808.33 257.06 1458.22 607.05 6970.38 00:08:55.581 00:08:55.581 21:40:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64189 00:08:55.581 21:40:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64258 00:08:55.581 21:40:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:55.581 21:40:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64259 00:08:55.581 21:40:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:55.581 21:40:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:58.868 Initializing NVMe Controllers 00:08:58.868 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:58.868 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:58.868 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:58.868 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:58.868 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:58.868 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:58.868 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:58.868 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:58.868 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:58.868 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:58.868 Initialization complete. Launching workers. 
00:08:58.868 ======================================================== 00:08:58.868 Latency(us) 00:08:58.868 Device Information : IOPS MiB/s Average min max 00:08:58.868 PCIE (0000:00:10.0) NSID 1 from core 0: 7788.65 30.42 2052.92 688.34 6084.01 00:08:58.868 PCIE (0000:00:11.0) NSID 1 from core 0: 7788.65 30.42 2053.96 715.92 5430.11 00:08:58.868 PCIE (0000:00:13.0) NSID 1 from core 0: 7788.65 30.42 2054.23 729.90 5327.36 00:08:58.868 PCIE (0000:00:12.0) NSID 1 from core 0: 7788.65 30.42 2054.36 727.43 5472.22 00:08:58.868 PCIE (0000:00:12.0) NSID 2 from core 0: 7788.65 30.42 2054.33 727.77 5769.88 00:08:58.868 PCIE (0000:00:12.0) NSID 3 from core 0: 7788.65 30.42 2054.29 714.37 5782.59 00:08:58.868 ======================================================== 00:08:58.868 Total : 46731.87 182.55 2054.01 688.34 6084.01 00:08:58.868 00:08:58.868 Initializing NVMe Controllers 00:08:58.868 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:58.868 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:58.868 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:58.868 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:58.868 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:58.868 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:58.868 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:58.868 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:58.868 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:58.868 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:58.868 Initialization complete. Launching workers. 00:08:58.868 ======================================================== 00:08:58.868 Latency(us) 00:08:58.868 Device Information : IOPS MiB/s Average min max 00:08:58.868 PCIE (0000:00:10.0) NSID 1 from core 1: 7734.80 30.21 2067.31 749.05 5597.31 00:08:58.868 PCIE (0000:00:11.0) NSID 1 from core 1: 7734.80 30.21 2068.32 768.33 5550.14 00:08:58.868 PCIE (0000:00:13.0) NSID 1 from core 1: 7734.80 30.21 2068.35 764.12 5214.13 00:08:58.868 PCIE (0000:00:12.0) NSID 1 from core 1: 7734.80 30.21 2068.33 768.15 4918.84 00:08:58.868 PCIE (0000:00:12.0) NSID 2 from core 1: 7734.80 30.21 2068.32 769.98 5233.35 00:08:58.868 PCIE (0000:00:12.0) NSID 3 from core 1: 7734.80 30.21 2068.27 771.44 5194.21 00:08:58.868 ======================================================== 00:08:58.868 Total : 46408.83 181.28 2068.15 749.05 5597.31 00:08:58.868 00:09:00.781 Initializing NVMe Controllers 00:09:00.781 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.781 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.781 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:00.781 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.781 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:00.781 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:00.781 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:00.781 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:00.781 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:00.781 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:00.781 Initialization complete. Launching workers. 
00:09:00.781 ======================================================== 00:09:00.781 Latency(us) 00:09:00.781 Device Information : IOPS MiB/s Average min max 00:09:00.781 PCIE (0000:00:10.0) NSID 1 from core 2: 4418.53 17.26 3618.85 765.31 12842.19 00:09:00.781 PCIE (0000:00:11.0) NSID 1 from core 2: 4418.53 17.26 3620.50 772.68 13027.78 00:09:00.781 PCIE (0000:00:13.0) NSID 1 from core 2: 4418.53 17.26 3620.63 712.87 13422.65 00:09:00.781 PCIE (0000:00:12.0) NSID 1 from core 2: 4418.53 17.26 3620.58 676.55 13759.19 00:09:00.781 PCIE (0000:00:12.0) NSID 2 from core 2: 4418.53 17.26 3620.52 626.19 13028.91 00:09:00.781 PCIE (0000:00:12.0) NSID 3 from core 2: 4418.53 17.26 3620.30 605.06 13397.23 00:09:00.781 ======================================================== 00:09:00.781 Total : 26511.17 103.56 3620.23 605.06 13759.19 00:09:00.781 00:09:00.781 21:40:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64258 00:09:00.781 21:40:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64259 00:09:00.781 00:09:00.781 real 0m10.691s 00:09:00.781 user 0m18.350s 00:09:00.781 sys 0m0.623s 00:09:00.781 21:40:19 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.781 21:40:19 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:00.781 ************************************ 00:09:00.781 END TEST nvme_multi_secondary 00:09:00.781 ************************************ 00:09:00.781 21:40:19 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:00.781 21:40:19 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:00.781 21:40:19 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/63225 ]] 00:09:00.781 21:40:19 nvme -- common/autotest_common.sh@1090 -- # kill 63225 00:09:00.781 21:40:19 nvme -- common/autotest_common.sh@1091 -- # wait 63225 00:09:00.781 [2024-09-29 21:40:19.663970] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.781 [2024-09-29 21:40:19.664065] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.664101] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.664125] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.667429] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.667541] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.667575] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.667608] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.670581] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 
00:09:00.782 [2024-09-29 21:40:19.670668] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.670697] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.670727] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.674746] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.674834] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.674863] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:00.782 [2024-09-29 21:40:19.674894] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64137) is not found. Dropping the request. 00:09:01.077 21:40:19 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:01.077 21:40:19 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:01.077 21:40:19 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:01.077 21:40:19 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:01.077 21:40:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.077 21:40:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.077 ************************************ 00:09:01.077 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:01.077 ************************************ 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:01.077 * Looking for test storage... 
00:09:01.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:01.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.077 --rc genhtml_branch_coverage=1 00:09:01.077 --rc genhtml_function_coverage=1 00:09:01.077 --rc genhtml_legend=1 00:09:01.077 --rc geninfo_all_blocks=1 00:09:01.077 --rc geninfo_unexecuted_blocks=1 00:09:01.077 00:09:01.077 ' 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:01.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.077 --rc genhtml_branch_coverage=1 00:09:01.077 --rc genhtml_function_coverage=1 00:09:01.077 --rc genhtml_legend=1 00:09:01.077 --rc geninfo_all_blocks=1 00:09:01.077 --rc geninfo_unexecuted_blocks=1 00:09:01.077 00:09:01.077 ' 00:09:01.077 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:01.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.077 --rc genhtml_branch_coverage=1 00:09:01.077 --rc genhtml_function_coverage=1 00:09:01.077 --rc genhtml_legend=1 00:09:01.077 --rc geninfo_all_blocks=1 00:09:01.078 --rc geninfo_unexecuted_blocks=1 00:09:01.078 00:09:01.078 ' 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:01.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.078 --rc genhtml_branch_coverage=1 00:09:01.078 --rc genhtml_function_coverage=1 00:09:01.078 --rc genhtml_legend=1 00:09:01.078 --rc geninfo_all_blocks=1 00:09:01.078 --rc geninfo_unexecuted_blocks=1 00:09:01.078 00:09:01.078 ' 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:01.078 
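An aside on the lcov version check traced just above: cmp_versions splits each dotted version on '.' and '-' and compares the fields numerically until one side wins. A self-contained re-creation of that logic, a sketch rather than the literal scripts/common.sh body:

  lt() {   # usage: lt VER1 VER2  ->  status 0 iff VER1 < VER2 (numeric fields only)
    local -a v1 v2; local i
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the 'lt 1.15 2' call in the trace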
21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:01.078 21:40:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64426 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64426 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64426 ']' 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
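waitforlisten, entered above, amounts to launching the target and then polling its RPC socket until it answers. A hedged sketch of that loop; the rpc_get_methods probe and the liveness check are illustrative choices, not the helper's literal body:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0xF &
  tgt_pid=$!
  # Poll the default RPC socket (/var/tmp/spdk.sock) until the target responds.
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.5
  done
  echo "spdk_tgt (pid $tgt_pid) is listening"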
00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:01.078 21:40:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:01.340 [2024-09-29 21:40:20.102085] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:01.340 [2024-09-29 21:40:20.102575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64426 ] 00:09:01.340 [2024-09-29 21:40:20.263241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.598 [2024-09-29 21:40:20.454969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.598 [2024-09-29 21:40:20.455194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.598 [2024-09-29 21:40:20.455210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.598 [2024-09-29 21:40:20.454970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.165 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.165 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:02.165 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:02.165 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.165 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:02.424 nvme0n1 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_cICUh.txt 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:02.424 true 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727646021 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64449 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:02.424 21:40:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:04.334 [2024-09-29 21:40:23.204572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:04.334 [2024-09-29 21:40:23.204785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:04.334 [2024-09-29 21:40:23.204804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:04.334 [2024-09-29 21:40:23.204815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:04.334 [2024-09-29 21:40:23.208064] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64449 00:09:04.334 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64449 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64449 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_cICUh.txt 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_cICUh.txt 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64426 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64426 ']' 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64426 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.334 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64426 00:09:04.595 killing process with pid 64426 00:09:04.595 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.595 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.595 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64426' 00:09:04.595 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64426 00:09:04.595 21:40:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64426 00:09:05.980 21:40:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:05.980 21:40:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:05.980 00:09:05.980 real 0m5.047s 00:09:05.980 user 0m17.634s 
00:09:05.980 sys 0m0.511s 00:09:05.980 21:40:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.980 21:40:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.980 ************************************ 00:09:05.980 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:05.980 ************************************ 00:09:05.980 21:40:24 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:05.980 21:40:24 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:05.980 21:40:24 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.980 21:40:24 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.980 21:40:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.980 ************************************ 00:09:05.980 START TEST nvme_fio 00:09:05.980 ************************************ 00:09:05.980 21:40:24 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:05.980 21:40:24 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:05.980 21:40:24 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:05.980 21:40:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:05.980 21:40:24 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:05.980 21:40:24 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:05.980 21:40:24 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:05.980 21:40:24 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:05.981 21:40:24 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:06.241 21:40:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:06.241 21:40:24 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:06.241 21:40:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:06.241 21:40:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:06.241 21:40:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:06.241 21:40:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:06.241 21:40:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:06.241 21:40:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:06.241 21:40:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:06.502 21:40:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:06.502 21:40:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:06.502 
21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:06.502 21:40:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:06.763 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:06.763 fio-3.35 00:09:06.763 Starting 1 thread 00:09:13.346 00:09:13.346 test: (groupid=0, jobs=1): err= 0: pid=64589: Sun Sep 29 21:40:32 2024 00:09:13.346 read: IOPS=23.1k, BW=90.3MiB/s (94.7MB/s)(181MiB/2001msec) 00:09:13.346 slat (nsec): min=3351, max=84995, avg=4951.83, stdev=1981.69 00:09:13.346 clat (usec): min=158, max=8286, avg=2761.45, stdev=761.65 00:09:13.346 lat (usec): min=162, max=8291, avg=2766.40, stdev=762.80 00:09:13.346 clat percentiles (usec): 00:09:13.346 | 1.00th=[ 2024], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2343], 00:09:13.346 | 30.00th=[ 2409], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638], 00:09:13.346 | 70.00th=[ 2737], 80.00th=[ 2868], 90.00th=[ 3392], 95.00th=[ 4424], 00:09:13.346 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 7111], 99.95th=[ 7701], 00:09:13.346 | 99.99th=[ 8160] 00:09:13.346 bw ( KiB/s): min=84840, max=94656, per=97.16%, avg=89866.67, stdev=4912.30, samples=3 00:09:13.346 iops : min=21210, max=23664, avg=22466.67, stdev=1228.08, samples=3 00:09:13.346 write: IOPS=23.0k, BW=89.8MiB/s (94.1MB/s)(180MiB/2001msec); 0 zone resets 00:09:13.346 slat (nsec): min=3478, max=49776, avg=5185.51, stdev=1894.65 00:09:13.346 clat (usec): min=174, max=8334, avg=2767.46, stdev=763.24 00:09:13.346 lat (usec): min=178, max=8349, avg=2772.65, stdev=764.33 00:09:13.346 clat percentiles (usec): 00:09:13.346 | 1.00th=[ 2024], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2343], 00:09:13.346 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2671], 00:09:13.346 | 70.00th=[ 2737], 80.00th=[ 2868], 90.00th=[ 3392], 95.00th=[ 4490], 00:09:13.346 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7373], 99.95th=[ 7898], 00:09:13.346 | 99.99th=[ 8160] 00:09:13.346 bw ( KiB/s): min=84720, max=95552, per=97.95%, avg=90058.67, stdev=5417.66, samples=3 00:09:13.346 iops : min=21180, max=23888, avg=22514.67, stdev=1354.41, samples=3 00:09:13.346 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 
1000=0.01% 00:09:13.346 lat (msec) : 2=0.75%, 4=92.81%, 10=6.39% 00:09:13.346 cpu : usr=99.30%, sys=0.00%, ctx=5, majf=0, minf=607 00:09:13.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:13.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.346 issued rwts: total=46269,45994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.346 00:09:13.346 Run status group 0 (all jobs): 00:09:13.346 READ: bw=90.3MiB/s (94.7MB/s), 90.3MiB/s-90.3MiB/s (94.7MB/s-94.7MB/s), io=181MiB (190MB), run=2001-2001msec 00:09:13.346 WRITE: bw=89.8MiB/s (94.1MB/s), 89.8MiB/s-89.8MiB/s (94.1MB/s-94.1MB/s), io=180MiB (188MB), run=2001-2001msec 00:09:13.601 ----------------------------------------------------- 00:09:13.601 Suppressions used: 00:09:13.601 count bytes template 00:09:13.601 1 32 /usr/src/fio/parse.c 00:09:13.601 1 8 libtcmalloc_minimal.so 00:09:13.601 ----------------------------------------------------- 00:09:13.601 00:09:13.601 21:40:32 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:13.601 21:40:32 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:13.601 21:40:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.601 21:40:32 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:13.859 21:40:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:13.859 21:40:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:14.117 21:40:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:14.117 21:40:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:14.117 
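The fio_plugin trace running through this stretch repeats once per controller; condensed, the launch it performs each time is the following (a distillation of the trace above, with the xtrace plumbing and error handling dropped):

  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  # The ASan runtime the plugin links against must be preloaded ahead of the
  # plugin itself, hence the two-entry LD_PRELOAD the trace assembles.
  asan_lib=$(ldd "$PLUGIN" | awk '/libasan/ {print $3}')
  LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

Note the filename quoting: the whole 'trtype=PCIe traddr=...' string, with dots in place of colons in the PCI address, is passed to fio as a single argument.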
21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:14.117 21:40:32 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:14.117 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:14.117 fio-3.35 00:09:14.117 Starting 1 thread 00:09:20.707 00:09:20.707 test: (groupid=0, jobs=1): err= 0: pid=64646: Sun Sep 29 21:40:38 2024 00:09:20.707 read: IOPS=19.3k, BW=75.2MiB/s (78.9MB/s)(151MiB/2001msec) 00:09:20.707 slat (nsec): min=3353, max=77558, avg=5424.08, stdev=2862.74 00:09:20.707 clat (usec): min=220, max=10197, avg=3308.88, stdev=1190.45 00:09:20.707 lat (usec): min=225, max=10249, avg=3314.31, stdev=1191.74 00:09:20.707 clat percentiles (usec): 00:09:20.707 | 1.00th=[ 1958], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:20.707 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 3032], 00:09:20.707 | 70.00th=[ 3490], 80.00th=[ 4293], 90.00th=[ 5145], 95.00th=[ 5800], 00:09:20.707 | 99.00th=[ 7111], 99.50th=[ 7373], 99.90th=[ 8160], 99.95th=[ 8586], 00:09:20.707 | 99.99th=[10159] 00:09:20.707 bw ( KiB/s): min=74792, max=81920, per=100.00%, avg=78592.00, stdev=3587.36, samples=3 00:09:20.707 iops : min=18698, max=20480, avg=19648.00, stdev=896.84, samples=3 00:09:20.707 write: IOPS=19.2k, BW=75.1MiB/s (78.8MB/s)(150MiB/2001msec); 0 zone resets 00:09:20.707 slat (usec): min=3, max=103, avg= 5.69, stdev= 2.97 00:09:20.707 clat (usec): min=194, max=10137, avg=3316.83, stdev=1190.53 00:09:20.707 lat (usec): min=199, max=10148, avg=3322.51, stdev=1191.87 00:09:20.707 clat percentiles (usec): 00:09:20.707 | 1.00th=[ 2024], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:20.707 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2835], 60.00th=[ 3032], 00:09:20.707 | 70.00th=[ 3458], 80.00th=[ 4293], 90.00th=[ 5211], 95.00th=[ 5866], 00:09:20.707 | 99.00th=[ 7111], 99.50th=[ 7373], 99.90th=[ 8225], 99.95th=[ 8717], 00:09:20.707 | 99.99th=[10028] 00:09:20.707 bw ( KiB/s): min=74888, max=82264, per=100.00%, avg=78680.00, stdev=3692.40, samples=3 00:09:20.707 iops : min=18722, max=20566, avg=19670.00, stdev=923.10, samples=3 00:09:20.707 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.03% 00:09:20.707 lat (msec) : 2=0.98%, 4=75.80%, 10=23.14%, 20=0.01% 00:09:20.707 cpu : usr=99.05%, sys=0.05%, ctx=6, majf=0, minf=607 00:09:20.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:20.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.707 issued rwts: total=38539,38485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.707 00:09:20.707 Run status group 0 (all jobs): 00:09:20.707 READ: bw=75.2MiB/s (78.9MB/s), 75.2MiB/s-75.2MiB/s (78.9MB/s-78.9MB/s), io=151MiB (158MB), run=2001-2001msec 00:09:20.707 WRITE: bw=75.1MiB/s (78.8MB/s), 75.1MiB/s-75.1MiB/s (78.8MB/s-78.8MB/s), io=150MiB (158MB), run=2001-2001msec 00:09:20.707 ----------------------------------------------------- 00:09:20.707 Suppressions used: 00:09:20.707 count bytes template 00:09:20.707 1 32 /usr/src/fio/parse.c 00:09:20.707 1 8 
libtcmalloc_minimal.so 00:09:20.707 ----------------------------------------------------- 00:09:20.707 00:09:20.707 21:40:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:20.707 21:40:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:20.707 21:40:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.707 21:40:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:20.707 21:40:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:20.707 21:40:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.707 21:40:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:20.707 21:40:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:20.707 21:40:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.707 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:20.707 fio-3.35 00:09:20.707 Starting 1 thread 00:09:28.822 00:09:28.822 test: (groupid=0, jobs=1): err= 0: pid=64707: Sun Sep 29 21:40:46 2024 00:09:28.822 read: IOPS=21.4k, BW=83.6MiB/s (87.7MB/s)(167MiB/2001msec) 00:09:28.822 slat (nsec): min=4244, max=48910, avg=5344.04, stdev=2404.29 00:09:28.822 clat (usec): min=862, max=8985, avg=2990.53, stdev=935.90 00:09:28.822 lat (usec): min=868, max=8998, avg=2995.87, stdev=937.24 00:09:28.822 clat percentiles (usec): 
00:09:28.822 | 1.00th=[ 2212], 5.00th=[ 2376], 10.00th=[ 2409], 20.00th=[ 2474], 00:09:28.822 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:28.822 | 70.00th=[ 2835], 80.00th=[ 3261], 90.00th=[ 4293], 95.00th=[ 5342], 00:09:28.822 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7767], 99.95th=[ 8094], 00:09:28.822 | 99.99th=[ 8455] 00:09:28.822 bw ( KiB/s): min=86312, max=94080, per=100.00%, avg=88904.00, stdev=4482.55, samples=3 00:09:28.822 iops : min=21578, max=23520, avg=22226.00, stdev=1120.64, samples=3 00:09:28.822 write: IOPS=21.2k, BW=83.0MiB/s (87.0MB/s)(166MiB/2001msec); 0 zone resets 00:09:28.822 slat (nsec): min=4336, max=80540, avg=5654.45, stdev=2471.40 00:09:28.822 clat (usec): min=873, max=9156, avg=2987.27, stdev=935.68 00:09:28.822 lat (usec): min=878, max=9170, avg=2992.93, stdev=937.06 00:09:28.822 clat percentiles (usec): 00:09:28.822 | 1.00th=[ 2245], 5.00th=[ 2376], 10.00th=[ 2409], 20.00th=[ 2474], 00:09:28.822 | 30.00th=[ 2540], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:28.822 | 70.00th=[ 2802], 80.00th=[ 3228], 90.00th=[ 4293], 95.00th=[ 5342], 00:09:28.822 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7832], 99.95th=[ 8094], 00:09:28.822 | 99.99th=[ 8717] 00:09:28.822 bw ( KiB/s): min=85888, max=94880, per=100.00%, avg=89066.67, stdev=5041.84, samples=3 00:09:28.822 iops : min=21472, max=23720, avg=22266.67, stdev=1260.46, samples=3 00:09:28.822 lat (usec) : 1000=0.02% 00:09:28.822 lat (msec) : 2=0.15%, 4=87.93%, 10=11.90% 00:09:28.822 cpu : usr=99.20%, sys=0.00%, ctx=3, majf=0, minf=607 00:09:28.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:28.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.822 issued rwts: total=42844,42519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.822 00:09:28.822 Run status group 0 (all jobs): 00:09:28.822 READ: bw=83.6MiB/s (87.7MB/s), 83.6MiB/s-83.6MiB/s (87.7MB/s-87.7MB/s), io=167MiB (175MB), run=2001-2001msec 00:09:28.822 WRITE: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=166MiB (174MB), run=2001-2001msec 00:09:28.822 ----------------------------------------------------- 00:09:28.822 Suppressions used: 00:09:28.822 count bytes template 00:09:28.822 1 32 /usr/src/fio/parse.c 00:09:28.822 1 8 libtcmalloc_minimal.so 00:09:28.822 ----------------------------------------------------- 00:09:28.822 00:09:28.822 21:40:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:28.822 21:40:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:28.822 21:40:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:28.822 21:40:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:28.822 21:40:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:28.822 21:40:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:28.822 21:40:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:28.822 21:40:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:28.822 21:40:47 nvme.nvme_fio -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:28.822 21:40:47 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:28.822 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:28.822 fio-3.35 00:09:28.822 Starting 1 thread 00:09:38.815 00:09:38.815 test: (groupid=0, jobs=1): err= 0: pid=64768: Sun Sep 29 21:40:56 2024 00:09:38.815 read: IOPS=18.5k, BW=72.2MiB/s (75.8MB/s)(145MiB/2001msec) 00:09:38.815 slat (nsec): min=4225, max=97536, avg=5567.15, stdev=2994.85 00:09:38.815 clat (usec): min=266, max=10362, avg=3435.23, stdev=1181.23 00:09:38.815 lat (usec): min=270, max=10460, avg=3440.80, stdev=1182.51 00:09:38.815 clat percentiles (usec): 00:09:38.815 | 1.00th=[ 2089], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:38.815 | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 2999], 60.00th=[ 3261], 00:09:38.815 | 70.00th=[ 3720], 80.00th=[ 4424], 90.00th=[ 5276], 95.00th=[ 5932], 00:09:38.815 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[ 8291], 99.95th=[ 8717], 00:09:38.815 | 99.99th=[10290] 00:09:38.815 bw ( KiB/s): min=65544, max=80288, per=100.00%, avg=74409.67, stdev=7812.78, samples=3 00:09:38.815 iops : min=16386, max=20072, avg=18602.33, stdev=1953.15, samples=3 00:09:38.815 write: IOPS=18.5k, BW=72.3MiB/s (75.8MB/s)(145MiB/2001msec); 0 zone resets 00:09:38.815 slat (nsec): min=4296, max=81745, avg=5786.45, stdev=2792.68 00:09:38.815 clat (usec): min=257, max=10287, avg=3455.73, stdev=1186.31 00:09:38.815 lat (usec): min=261, max=10300, avg=3461.51, stdev=1187.48 00:09:38.815 clat percentiles (usec): 00:09:38.815 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:38.815 | 30.00th=[ 2638], 40.00th=[ 2802], 50.00th=[ 3032], 
60.00th=[ 3294], 00:09:38.815 | 70.00th=[ 3752], 80.00th=[ 4490], 90.00th=[ 5342], 95.00th=[ 5932], 00:09:38.815 | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[ 8291], 99.95th=[ 8586], 00:09:38.815 | 99.99th=[10159] 00:09:38.815 bw ( KiB/s): min=65936, max=80400, per=100.00%, avg=74508.33, stdev=7595.48, samples=3 00:09:38.815 iops : min=16484, max=20100, avg=18627.00, stdev=1898.83, samples=3 00:09:38.815 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:38.815 lat (msec) : 2=0.61%, 4=73.07%, 10=26.28%, 20=0.02% 00:09:38.815 cpu : usr=98.85%, sys=0.10%, ctx=3, majf=0, minf=605 00:09:38.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:38.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.815 issued rwts: total=37009,37034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.815 00:09:38.815 Run status group 0 (all jobs): 00:09:38.815 READ: bw=72.2MiB/s (75.8MB/s), 72.2MiB/s-72.2MiB/s (75.8MB/s-75.8MB/s), io=145MiB (152MB), run=2001-2001msec 00:09:38.815 WRITE: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=145MiB (152MB), run=2001-2001msec 00:09:38.815 ----------------------------------------------------- 00:09:38.815 Suppressions used: 00:09:38.815 count bytes template 00:09:38.815 1 32 /usr/src/fio/parse.c 00:09:38.815 1 8 libtcmalloc_minimal.so 00:09:38.815 ----------------------------------------------------- 00:09:38.815 00:09:38.815 21:40:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:38.815 21:40:56 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:38.815 00:09:38.815 real 0m31.473s 00:09:38.815 user 0m21.251s 00:09:38.815 sys 0m17.909s 00:09:38.815 ************************************ 00:09:38.815 END TEST nvme_fio 00:09:38.815 ************************************ 00:09:38.815 21:40:56 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.816 21:40:56 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:38.816 00:09:38.816 real 1m40.617s 00:09:38.816 user 3m42.064s 00:09:38.816 sys 0m28.238s 00:09:38.816 21:40:56 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.816 21:40:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.816 ************************************ 00:09:38.816 END TEST nvme 00:09:38.816 ************************************ 00:09:38.816 21:40:56 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:38.816 21:40:56 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:38.816 21:40:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.816 21:40:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.816 21:40:56 -- common/autotest_common.sh@10 -- # set +x 00:09:38.816 ************************************ 00:09:38.816 START TEST nvme_scc 00:09:38.816 ************************************ 00:09:38.816 21:40:56 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:38.816 * Looking for test storage... 
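The nvme_fio traces above repeat one pattern per controller: ldd the fio plugin to find the sanitizer runtime it links against, preload that runtime together with the SPDK ioengine, and run fio against a PCIe traddr. A minimal stand-alone sketch of that invocation, using the repo paths and job file exactly as they appear in the trace (nothing beyond the trace is verified):

    #!/usr/bin/env bash
    # Sketch of the preload pattern traced above (paths taken from this log).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # Find the ASAN runtime the plugin was linked against, if any.
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3; exit}')

    # fio treats ':' as a separator, so the PCIe address uses '.' in --filename.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio "$job" '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

Preloading the ASAN library before the plugin is what keeps the sanitizer interceptors ahead of the ioengine's allocations; without it fio would abort at dlopen time, which is why the trace breaks out of the sanitizer loop as soon as a matching libasan is found.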
00:09:38.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.816 21:40:56 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:38.816 21:40:56 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:38.816 21:40:56 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:38.816 21:40:56 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:38.816 21:40:56 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.816 21:40:56 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:38.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.816 --rc genhtml_branch_coverage=1 00:09:38.816 --rc genhtml_function_coverage=1 00:09:38.816 --rc genhtml_legend=1 00:09:38.816 --rc geninfo_all_blocks=1 00:09:38.816 --rc geninfo_unexecuted_blocks=1 00:09:38.816 00:09:38.816 ' 00:09:38.816 21:40:56 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:38.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.816 --rc genhtml_branch_coverage=1 00:09:38.816 --rc genhtml_function_coverage=1 00:09:38.816 --rc genhtml_legend=1 00:09:38.816 --rc geninfo_all_blocks=1 00:09:38.816 --rc geninfo_unexecuted_blocks=1 00:09:38.816 00:09:38.816 ' 00:09:38.816 21:40:56 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:38.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.816 --rc genhtml_branch_coverage=1 00:09:38.816 --rc genhtml_function_coverage=1 00:09:38.816 --rc genhtml_legend=1 00:09:38.816 --rc geninfo_all_blocks=1 00:09:38.816 --rc geninfo_unexecuted_blocks=1 00:09:38.816 00:09:38.816 ' 00:09:38.816 21:40:56 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:38.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.816 --rc genhtml_branch_coverage=1 00:09:38.816 --rc genhtml_function_coverage=1 00:09:38.816 --rc genhtml_legend=1 00:09:38.816 --rc geninfo_all_blocks=1 00:09:38.816 --rc geninfo_unexecuted_blocks=1 00:09:38.816 00:09:38.816 ' 00:09:38.816 21:40:56 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.816 21:40:56 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.816 21:40:56 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.816 21:40:56 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.816 21:40:56 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.816 21:40:56 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:38.816 21:40:56 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
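The lt 1.15 2 trace above is scripts/common.sh splitting each version string on .-: and comparing the fields numerically, left to right. A simplified re-implementation of that field-by-field comparison (the function name lt and the 1.15-vs-2 inputs are from the trace; the body here is a condensed sketch, not the script's exact code):

    # Sketch: succeed (return 0) when version $1 sorts before version $2.
    lt() {
        local IFS='.-:' i n
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing fields compare as 0, so 1.15 vs 2 becomes (1,15) vs (2,0).
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2   # succeeds on the first field (1 < 2), as in the trace

The test only uses the result to pick lcov flags: when the installed lcov predates 2.x, the branch/function coverage options are spelled with the older --rc names exported just above.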
00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:38.816 21:40:56 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:38.816 21:40:56 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.816 21:40:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:38.816 21:40:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:38.816 21:40:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:38.816 21:40:56 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:38.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.816 Waiting for block devices as requested 00:09:38.816 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.816 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.816 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.816 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.100 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:44.100 21:41:02 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:44.100 21:41:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:44.100 21:41:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:44.100 21:41:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.100 21:41:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:44.100 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:44.101 21:41:02 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:44.101 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:44.102 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:44.103 21:41:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:44.103 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
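
As a worked example of what these id-ns fields encode: nsze is a count of logical blocks, and the in-use LBA format's lbads is log2 of the block size, so namespace capacity is nsze * 2^lbads. For nvme0n1 the trace read nsze=0x140000 above and, a little further down, marks lbaf4 (lbads:12, i.e. 4096-byte blocks) as in use:

    # Worked example using values from this trace.
    nsze=0x140000   # block count, from the id-ns fields above
    lbads=12        # log2(block size), from the in-use lbaf4 descriptor below
    echo $(( nsze * (1 << lbads) ))                  # 5368709120 bytes
    echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"    # 5 GiB
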
00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.104 21:41:02 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:44.104 21:41:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:44.104 21:41:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:44.104 21:41:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:44.105 21:41:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.105 21:41:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:44.105 21:41:02 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:44.105 
21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
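
Bit-field registers such as oacs (Optional Admin Command Support) only make sense against the NVMe base specification's bit assignments. The 0x12a just read for nvme1 sets bits 1, 3, 5 and 8 — Format NVM, Namespace Management, Directives, and Doorbell Buffer Config — consistent with QEMU's emulated controller. A small sketch of decoding it (bit names per the NVMe spec; the test helper itself stores the raw value and does no such decoding):

    oacs=0x12a   # value read above for nvme1
    names=("Security Send/Recv" "Format NVM" "Firmware Commit/Download"
           "Namespace Management" "Device Self-test" "Directives"
           "NVMe-MI Send/Recv" "Virtualization Mgmt" "Doorbell Buffer Config")
    for i in "${!names[@]}"; do
        (( oacs & (1 << i) )) && echo "bit $i: ${names[i]}"
    done
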
00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.105 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:44.106 21:41:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:44.106 21:41:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.106 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:44.107 21:41:02 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
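
Once a controller's id-ctrl fields are captured, the trace below globs that controller's namespaces out of sysfs and records everything in global maps (ctrls, nvmes, bdfs, ordered_ctrls) keyed by device name. A minimal sketch of that bookkeeping, assuming the usual sysfs layout where /sys/class/nvme/nvmeX/device resolves to the backing PCI function:

    # Sketch of the enumeration visible below: for each controller, record
    # its name, its PCI address, and any nvmeXnY namespace children.
    declare -A ctrls bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        dev=${ctrl##*/}                                          # e.g. nvme1
        ctrls["$dev"]=$dev
        bdfs["$dev"]=$(basename "$(readlink -f "$ctrl/device")") # e.g. 0000:00:10.0
        ordered_ctrls[${dev/nvme/}]=$dev                         # index by controller number
        for ns in "$ctrl/${dev}n"*; do                           # nvme1n1, nvme1n2, ...
            [[ -e $ns ]] && echo "$dev: namespace ${ns##*/}"
        done
    done
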
00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.107 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:44.108 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.109 
21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:44.109 21:41:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:44.109 21:41:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:44.109 21:41:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.109 21:41:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:44.109 21:41:02 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:44.109 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:44.110 21:41:02 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:44.110 21:41:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
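The trace above repeats one pattern for every identify field: split a line of nvme id-ctrl output on ':' into reg and val, test [[ -n $val ]], then eval the pair into a per-device associative array. The following is a condensed sketch of that loop, not the verbatim nvme/functions.sh source; it assumes a fixed array name, whereas the script's nvme_get needs eval because the array name arrives as the $ref parameter.

    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # "oacs " -> "oacs"
        val=${val# }                     # drop the space after the separator
        [[ -n $val ]] && ctrl[$reg]=$val # header lines with no value are skipped
    done < <(nvme id-ctrl /dev/nvme2)
    echo "${ctrl[oacs]}"                 # -> 0x12a on this QEMU controller

Because read puts everything after the first ':' into val, multi-colon values such as the lbaf entries ("ms:0 lbads:9 rp:0") survive intact, exactly as they appear stored in the trace.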
00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.110 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:44.111 21:41:02 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
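A few entries back the controller reported oacs=0x12a. The bit positions below come from the NVMe base specification, not from the script, and the snippet is an illustrative decode only:

    oacs=0x12a                       # as captured into nvme2[oacs] above
    (( oacs & (1 << 1) )) && echo "Format NVM supported"
    (( oacs & (1 << 3) )) && echo "Namespace Management supported"
    (( oacs & (1 << 8) )) && echo "Doorbell Buffer Config supported"

All three bits are set in 0x12a, which is the usual capability set for a QEMU NVMe controller.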
00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
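The queue entry sizes just recorded, sqes=0x66 and cqes=0x44, pack two log2 sizes into one byte: the low nibble is the required entry size and the high nibble the maximum (equal here). A quick decode, assuming only those two captured values:

    sqes=0x66; cqes=0x44
    echo "SQE: $(( 1 << (sqes & 0xf) )) bytes"   # 64-byte submission entries
    echo "CQE: $(( 1 << (cqes >> 4) )) bytes"    # 16-byte completion entries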
00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:44.111 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:44.112 
21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
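The fields just read for nvme2n1 are enough to size the namespace: nsze counts LBAs, and flbas=0x4 selects LBA format 4, which the trace lists further down as lbads:12, i.e. 4096-byte blocks. A rough check under those assumptions:

    nsze=0x100000                     # 1,048,576 LBAs
    lbads=12                          # from lbaf4 "ms:0 lbads:12 rp:0 (in use)"
    echo $(( nsze * (1 << lbads) ))   # 4294967296 bytes = 4 GiB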
00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.112 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:44.113 21:41:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:44.113 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:44.114 21:41:02 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:44.114 21:41:02 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:44.114 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
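The block above is bash xtrace output from the nvme_get helper in functions.sh: it runs nvme-cli's id-ns against the device, splits each output line on ':' into a register name and a value, and evals the pair into a global associative array named after the device node (nvme2n1, nvme2n2, ...). Below is a minimal sketch of that pattern, assuming nvme-cli's "field : value" output format; the name parse_id_output is illustrative (the real helper is nvme_get, and it also normalizes composite keys such as "lbaf 0" into lbaf0, which this sketch omits).

    parse_id_output() {
        # Read "reg : val" lines from nvme-cli and store them in a global
        # associative array named after the device, mirroring the trace.
        local ref=$1 reg val
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # strip padding around the key
            [[ -n $reg ]] || continue     # skip blank or headerless lines
            eval "${ref}[\$reg]=\${val# }"
        done < <(/usr/local/src/nvme-cli/nvme id-ns "/dev/$ref")
    }
    # e.g. parse_id_output nvme2n2; echo "${nvme2n2[nsze]}"   # -> 0x100000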
00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
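The lbaf entries that follow describe the namespace's LBA formats: ms is the per-block metadata size in bytes, lbads is log2 of the data block size, and rp is a relative-performance hint. flbas selects the format in use, so the flbas=0x4 captured above, together with lbaf4 "ms:0 lbads:12 (in use)", means 4096-byte blocks with no separate metadata. A quick decode using the traced values (for namespaces with at most 16 formats, only FLBAS bits 0-3 index the table):

    flbas=0x4
    fmt=$(( flbas & 0xf ))   # -> 4, i.e. the lbaf4 entry
    lbads=12                 # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
    echo "active format lbaf$fmt, block size $(( 1 << lbads )) bytes"   # -> 4096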
00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 
21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.115 
21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:44.115 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:44.116 21:41:02 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:44.116 
21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:44.116 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:44.117 21:41:02 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:44.117 21:41:02 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:44.117 21:41:02 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:44.117 21:41:02 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:44.117 21:41:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
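At this point the outer loop has moved on from namespace parsing to the next controller: functions.sh iterates over /sys/class/nvme/nvme*, resolves each controller's PCI address (0000:00:13.0 for nvme3), filters it through pci_can_use, and then feeds /dev/nvme3 to the same field parser, this time with id-ctrl. A condensed sketch of that bookkeeping, with array names taken from the trace (ctrls, bdfs); reading the BDF from the sysfs "address" attribute is an assumption here, whereas the script itself resolves it via scripts/common.sh:

    declare -A ctrls bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                  # e.g. nvme3
        bdfs[$ctrl_dev]=$(< "$ctrl/address")  # e.g. 0000:00:13.0 (assumed attribute)
        ctrls[$ctrl_dev]=$ctrl_dev
        # ...then parse "nvme id-ctrl /dev/$ctrl_dev" as sketched earlier
    done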
00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.117 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 
21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:44.118 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
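For the nvme_scc suite, the fields that matter are the copy limits: the namespace dumps above recorded mssrl=128 (max blocks per source range), mcl=128 (max blocks per whole copy), and msrc=127 (max source ranges, zero-based, so 128 ranges), and the id-ctrl word printed a few entries below, oncs=0x15d, has bit 8 set, which advertises the Copy command itself. A small sanity check over those traced values (the bit position follows the NVMe base specification):

    oncs=0x15d mssrl=128 mcl=128 msrc=127
    (( oncs & (1 << 8) )) && echo "Copy (simple copy) supported"
    echo "max source ranges:    $(( msrc + 1 ))"
    echo "max blocks per range: $mssrl"
    echo "max blocks per copy:  $mcl"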
00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
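The oncs=0x15d captured a few entries back is the ONCS (Optional NVM Command Support) field, a bitmask of optional NVM commands; per the NVMe base spec, bit 8 is the Copy command, which is exactly what the ctrl_has_scc selection just below tests with (( oncs & 1 << 8 )). Decoding the value seen here (illustrative only):

    oncs=0x15d                                   # bits 0,2,3,4,6,8 set
    (( oncs & 1 << 0 )) && echo "Compare"
    (( oncs & 1 << 2 )) && echo "Dataset Management"
    (( oncs & 1 << 3 )) && echo "Write Zeroes"
    (( oncs & 1 << 8 )) && echo "Copy (SCC)"     # why nvme_scc can run on this drive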
00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:44.119 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:44.120 21:41:02 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:44.120 21:41:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:44.120 
21:41:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:44.120 21:41:02 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:44.120 21:41:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:44.120 21:41:02 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:44.120 21:41:02 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:44.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.643 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.643 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.643 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.904 0000:00:10.0 (1b36 
0010): nvme -> uio_pci_generic
00:09:44.904 21:41:03 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:44.904 21:41:03 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:44.904 21:41:03 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:44.904 21:41:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:44.904 ************************************
00:09:44.904 START TEST nvme_simple_copy
00:09:44.904 ************************************
00:09:44.904 21:41:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:45.166 Initializing NVMe Controllers
00:09:45.166 Attaching to 0000:00:10.0
00:09:45.166 Controller supports SCC. Attached to 0000:00:10.0
00:09:45.166 Namespace ID: 1 size: 6GB
00:09:45.166 Initialization complete.
00:09:45.166
00:09:45.166 Controller QEMU NVMe Ctrl (12340 )
00:09:45.166 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:45.166 Namespace Block Size:4096
00:09:45.166 Writing LBAs 0 to 63 with Random Data
00:09:45.166 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:45.166 LBAs matching Written Data: 64
00:09:45.166
00:09:45.166 real 0m0.241s
00:09:45.166 user 0m0.090s
00:09:45.166 sys 0m0.050s
00:09:45.166 21:41:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:45.166 21:41:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:45.166 ************************************
00:09:45.166 END TEST nvme_simple_copy
00:09:45.166 ************************************
00:09:45.166
00:09:45.166 real 0m7.505s
00:09:45.166 user 0m0.934s
00:09:45.166 sys 0m1.323s
00:09:45.166 21:41:03 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:45.166 21:41:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:45.166 ************************************
00:09:45.166 END TEST nvme_scc
00:09:45.166 ************************************
00:09:45.166 21:41:04 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:45.166 21:41:04 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:45.166 21:41:04 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:45.166 21:41:04 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:45.166 21:41:04 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:45.166 21:41:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:45.166 21:41:04 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:45.166 21:41:04 -- common/autotest_common.sh@10 -- # set +x
00:09:45.166 ************************************
00:09:45.166 START TEST nvme_fdp
00:09:45.166 ************************************
00:09:45.166 21:41:04 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:09:45.166 * Looking for test storage...
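The START TEST / END TEST banners and the real/user/sys triplets above come from the harness's run_test wrapper in autotest_common.sh, which brackets and times every test it launches (the @1101 argument check and @1107 xtrace_disable entries are its preamble). A rough sketch of the pattern, with the wrapper body inferred from the log rather than quoted from the source:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # emits the real/user/sys lines seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    run_test nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:10.0'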
00:09:45.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.166 21:41:04 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:45.166 21:41:04 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:45.166 21:41:04 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:45.428 21:41:04 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:45.428 21:41:04 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.428 21:41:04 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:45.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.428 --rc genhtml_branch_coverage=1 00:09:45.428 --rc genhtml_function_coverage=1 00:09:45.428 --rc genhtml_legend=1 00:09:45.428 --rc geninfo_all_blocks=1 00:09:45.428 --rc geninfo_unexecuted_blocks=1 00:09:45.428 00:09:45.428 ' 00:09:45.428 21:41:04 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:45.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.428 --rc genhtml_branch_coverage=1 00:09:45.428 --rc genhtml_function_coverage=1 00:09:45.428 --rc genhtml_legend=1 00:09:45.428 --rc geninfo_all_blocks=1 00:09:45.428 --rc geninfo_unexecuted_blocks=1 00:09:45.428 00:09:45.428 ' 00:09:45.428 21:41:04 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:45.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.428 --rc genhtml_branch_coverage=1 00:09:45.428 --rc genhtml_function_coverage=1 00:09:45.428 --rc genhtml_legend=1 00:09:45.428 --rc geninfo_all_blocks=1 00:09:45.428 --rc geninfo_unexecuted_blocks=1 00:09:45.428 00:09:45.428 ' 00:09:45.428 21:41:04 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:45.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.428 --rc genhtml_branch_coverage=1 00:09:45.428 --rc genhtml_function_coverage=1 00:09:45.428 --rc genhtml_legend=1 00:09:45.428 --rc geninfo_all_blocks=1 00:09:45.428 --rc geninfo_unexecuted_blocks=1 00:09:45.428 00:09:45.428 ' 00:09:45.428 21:41:04 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:45.428 21:41:04 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:45.428 21:41:04 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:45.428 21:41:04 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:45.428 21:41:04 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.428 21:41:04 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.428 21:41:04 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.428 21:41:04 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.428 21:41:04 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.429 21:41:04 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:45.429 21:41:04 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
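The lt 1.15 2 probe a few entries back (just before the LCOV exports) is scripts/common.sh comparing the installed lcov version against 2: cmp_versions splits both strings on '.', '-' and ':' (the IFS=.-: reads in the trace), then compares the fields numerically left to right, padding missing fields with zero, so 1.15 sorts below 2 and pre-2.x coverage options are chosen. The gist, condensed into one hypothetical function (the real cmp_versions also validates each field via decimal()):

    lt() {  # lt A B -> success when version A < version B
        local -a v1 v2; local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"  # true for the lcov on this runner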
00:09:45.429 21:41:04 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:45.429 21:41:04 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:45.429 21:41:04 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:45.429 21:41:04 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:45.429 21:41:04 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:45.429 21:41:04 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:45.429 21:41:04 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:45.429 21:41:04 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:45.429 21:41:04 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:45.429 21:41:04 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.429 21:41:04 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:45.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:45.690 Waiting for block devices as requested 00:09:45.690 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.690 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.951 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.951 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.248 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:51.248 21:41:09 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:51.248 21:41:09 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:51.248 21:41:09 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:51.248 21:41:09 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.248 21:41:09 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
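scan_nvme_ctrls, whose trace begins here, iterates /sys/class/nvme/nvme*, resolves each controller's PCI address, and fills the ctrls/nvmes/bdfs maps declared just above before nvme_get snapshots the identify data. The discovery half in miniature, assuming PCIe-attached controllers where /sys/class/nvme/<name>/device links to the PCI function (the pci_can_use allow/deny filtering is omitted):

    declare -A ctrls bdfs
    for path in /sys/class/nvme/nvme*; do
        [[ -e $path ]] || continue
        name=${path##*/}                                 # nvme0, nvme1, ...
        bdf=$(basename "$(readlink -f "$path/device")")  # e.g. 0000:00:11.0
        ctrls[$name]=$name
        bdfs[$name]=$bdf
    done
    for name in "${!bdfs[@]}"; do echo "$name -> ${bdfs[$name]}"; done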
00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:51.248 21:41:09 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:51.248 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:51.249 21:41:09 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
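The ver=0x10400 recorded for nvme0 above is the controller's Version (VS) field: bits 31:16 hold the major number, 15:8 the minor, 7:0 the tertiary, so this QEMU device reports NVMe 1.4.0. Worked out in shell arithmetic:

    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    # -> NVMe 1.4.0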
00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:51.249 21:41:09 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:51.249 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:51.250 21:41:09 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:51.250 21:41:09 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.250 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:51.251 
21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:51.251 21:41:09 nvme_fdp -- 
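# ---------------------------------------------------------------------------
# The xtrace above and below is nvme/functions.sh's nvme_get populating a
# global associative array (nvme0, nvme0n1, ...) from `nvme id-ctrl` /
# `nvme id-ns` output: split each line on ':' into reg/val, skip empty
# values, eval the pair into the array. A minimal sketch of that loop,
# reconstructed from the trace rather than the source (argument handling
# and value trimming here are assumptions):
#
#   nvme_get() {
#       local ref=$1 reg val; shift
#       local -gA "$ref=()"
#       while IFS=: read -r reg val; do
#           [[ -n $val ]] || continue            # skip headers/blank fields
#           eval "${ref}[${reg// /}]=\"\${val# }\""
#       done < <("$@")
#   }
#
#   # e.g.: nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
# ---------------------------------------------------------------------------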
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.251 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:51.252 21:41:09 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:51.252 21:41:09 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:51.252 21:41:09 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:51.252 21:41:09 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:51.253 21:41:09 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.253 21:41:09 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # 
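# ---------------------------------------------------------------------------
# The trace has just finished nvme0 and is starting nvme1: functions.sh
# (lines ~47-63) walks every controller under /sys/class/nvme, checks it
# against the PCI allowlist via pci_can_use, parses id-ctrl, parses id-ns
# for each namespace, and registers the results in the ctrls/nvmes/bdfs/
# ordered_ctrls maps. Assumed skeleton, reconstructed from the trace (the
# bdf lookup itself is elided in the xtrace, so it is elided here too):
#
#   for ctrl in /sys/class/nvme/nvme*; do
#       [[ -e $ctrl ]] || continue
#       pci_can_use "$pci" || continue           # bdf obtained from sysfs
#       ctrl_dev=${ctrl##*/}
#       nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
#       for ns in "$ctrl/${ctrl##*/}n"*; do
#           [[ -e $ns ]] && nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
#       done
#       ctrls["$ctrl_dev"]=$ctrl_dev
#       bdfs["$ctrl_dev"]=$pci
#       ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
#   done
# ---------------------------------------------------------------------------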
IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 
21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:51.253 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 
21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.254 21:41:09 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:51.254 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:51.255 21:41:09 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.255 21:41:09 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:51.256 21:41:09 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:51.256 21:41:09 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:51.256 21:41:09 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:51.256 21:41:09 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:51.256 21:41:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:51.256 21:41:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:51.256 21:41:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:51.256 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:51.256 21:41:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.256 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:51.257 21:41:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:51.257 21:41:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:51.257 21:41:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.257 21:41:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:51.257 
21:41:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:51.257 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:51.258 21:41:10 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.258 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
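
The IFS=: / read -r reg val / eval triplets repeated throughout this trace all come from the nvme_get helper in nvme/functions.sh (lines 16-23 per the @NN markers above): it runs nvme-cli's id-ctrl or id-ns, splits each output line at the first colon, and evals the pair into a global associative array (nvme1, nvme2, nvme2n1, ...). A minimal sketch of that loop, assuming the trimming the trace implies (padded reg collapsed, one leading space stripped from val); this is illustrative, not the verbatim SPDK source:

    # Sketch: parse "field : value" lines from nvme-cli into a global
    # associative array named by $1, mirroring the traced 'local -gA nvme2=()'.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg// /}   # 'sn        ' -> 'sn', 'lbaf  0 ' -> 'lbaf0'
            val=${val# }     # drop the single space after the colon
            # banner lines carry no colon, so val is empty and they are skipped,
            # matching the '[[ -n '' ]]' entries in the trace
            [[ -n $val ]] && eval "${ref}[$reg]=\"$val\""
        done < <("$@")       # values with embedded double quotes are not handled
    }
    # e.g. nvme_get_sketch nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
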
00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:51.259 21:41:10 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.259 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
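
Several of the hex fields just captured are bitfields rather than plain numbers: oncs=0x15d advertises optional NVM commands bit by bit, and sqes=0x66 / cqes=0x44 pack the required and maximum queue entry sizes as log2(bytes) in the low and high nibbles. A small decoder, assuming the standard NVMe base-spec bit assignments (the names below are ours, not part of the test scripts):

    # Decode ONCS and SQES/CQES values as traced above; bit names per NVMe spec.
    decode_oncs() {
        local oncs=$1 bit
        local names=(compare write_uncorrectable dsm write_zeroes save_select
                     reservations timestamp verify copy)
        for bit in "${!names[@]}"; do
            (( oncs & (1 << bit) )) && printf 'oncs bit %d: %s\n' "$bit" "${names[bit]}"
        done
    }
    decode_qes() { # low nibble = required entry size, high nibble = max, as log2
        local qes=$1
        printf 'required %d bytes, max %d bytes\n' $((1 << (qes & 0xf))) $((1 << (qes >> 4)))
    }
    # decode_oncs 0x15d  -> compare, dsm, write_zeroes, save_select, timestamp, copy
    # decode_qes 0x66    -> required 64 bytes, max 64 bytes (SQE)
    # decode_qes 0x44    -> required 16 bytes, max 16 bytes (CQE)
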
00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:51.260 21:41:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
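
The id-ns fields being collected here combine into the usable namespace size: the low four bits of flbas select the in-use LBA format, that format's lbads gives log2 of the block size, and nsze counts blocks. A quick check against the traced values (helper name is ours; for nvme2n1 we assume lbaf4 carries lbads:12 as it does in the lbaf table QEMU reports for nvme1n1):

    # Namespace size = nsze blocks * 2^lbads bytes.
    ns_bytes() {
        local nsze=$1 lbads=$2
        echo $(( nsze * (1 << lbads) ))
    }
    # nvme2n1: nsze=0x100000, flbas=0x4 -> lbaf4 -> 4096-byte blocks
    ns_bytes 0x100000 12   # 4294967296 bytes (4 GiB)
    # nvme1n1: nsze=0x17a17a, flbas=0x7 -> lbaf7 (ms:64 lbads:12, "in use")
    ns_bytes 0x17a17a 12   # 6343335936 bytes (~5.9 GiB)
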
00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:51.260 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
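
Note how multi-colon values such as the power-state line (ps0) and the LBA-format descriptors survive this parsing: with IFS=: the read splits only at the first colon, and everything after it, embedded colons included, lands in val with its trailing spaces intact, which is exactly why the trace stores strings like 'ms:0 lbads:9 rp:0 '. A two-line illustration of that read behavior:

    # Only the first colon separates reg from val; later colons stay in val.
    IFS=: read -r reg val <<< 'lbaf  0 : ms:0   lbads:9  rp:0 '
    printf 'reg=[%s] val=[%s]\n' "$reg" "$val"
    # reg=[lbaf  0 ] val=[ ms:0   lbads:9  rp:0 ]

The space-stripping on reg and the single leading space dropped from val in the helper sketched earlier turn this pair into the nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' assignments seen below.
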
00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:51.261 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:51.262 21:41:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.262 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:51.263 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
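Stepping back from the field-by-field noise: the @47-@63 and @54-@58 markers in this trace outline the enumeration that drives it, an outer loop over /sys/class/nvme/nvme*, an inner loop over each controller's namespaces, and a registration step (visible below once nvme2n3 finishes). Roughly, under the names the trace shows (declarations and details are assumptions, not copied from functions.sh):

    declare -A ctrls nvmes bdfs _ctrl_ns            # assumed declarations
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do           # functions.sh@47
        ctrl_dev=${ctrl##*/}                        # nvme2, nvme3, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        for ns in "$ctrl/${ctrl##*/}n"*; do         # functions.sh@54: nvme2n1..n3
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev             # @58: index by namespace number
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                # @60-@63 registration
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

The real helper also records the PCI address of each controller (bdfs[nvme2]=0000:00:12.0 in this run) and gates every device on pci_can_use before touching it, as the scripts/common.sh lines in the trace show.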
00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:51.264 
21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:51.264 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:51.265 21:41:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:51.265 21:41:10 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:51.265 21:41:10 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:51.265 21:41:10 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.265 21:41:10 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:51.265 21:41:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:51.265 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 
21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:51.266 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 
21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.267 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
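What this stretch of trace is doing: nvme/functions.sh walks every `reg : value` line of the controller's identify output, splits it on `:` with a temporary IFS, and evals each pair into an `nvme3[...]` associative-array entry. A minimal standalone sketch of that same parsing pattern (assuming nvme-cli's `field : value` id-ctrl layout; the device node and array name here are illustrative, not taken from functions.sh):

  #!/usr/bin/env bash
  # Collect `nvme id-ctrl` fields into an associative array, one entry per register.
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg// /}        # "ps    0 " -> "ps0", matching the keys seen in the trace
      val=${val# }          # trim the leading space after the colon
      [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme3)   # illustrative device node
  echo "oacs=${ctrl[oacs]} ctratt=${ctrl[ctratt]} subnqn=${ctrl[subnqn]}"

Values that themselves contain colons, like the ps0 power-state string above, survive intact because `read` hands everything after the first separator to the last variable.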
00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:51.268 21:41:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:51.268 21:41:10 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:51.268 21:41:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:51.268 21:41:10 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:51.268 21:41:10 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:51.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.094 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.094 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.094 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.353 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:52.353 21:41:11 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:52.353 21:41:11 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:52.353 21:41:11 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:52.353 21:41:11 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:52.353 ************************************
00:09:52.353 START TEST nvme_flexible_data_placement
00:09:52.353 ************************************
00:09:52.353 21:41:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:52.612 Initializing NVMe Controllers
00:09:52.612 Attaching to 0000:00:13.0
00:09:52.612 Controller supports FDP Attached to 0000:00:13.0
00:09:52.612 Namespace ID: 1 Endurance Group ID: 1
00:09:52.612 Initialization complete.
00:09:52.612
00:09:52.612 ==================================
00:09:52.612 == FDP tests for Namespace: #01 ==
00:09:52.612 ==================================
00:09:52.612
00:09:52.612 Get Feature: FDP:
00:09:52.612 =================
00:09:52.612 Enabled: Yes
00:09:52.612 FDP configuration Index: 0
00:09:52.612
00:09:52.612 FDP configurations log page
00:09:52.612 ===========================
00:09:52.612 Number of FDP configurations: 1
00:09:52.612 Version: 0
00:09:52.612 Size: 112
00:09:52.612 FDP Configuration Descriptor: 0
00:09:52.612 Descriptor Size: 96
00:09:52.612 Reclaim Group Identifier format: 2
00:09:52.612 FDP Volatile Write Cache: Not Present
00:09:52.612 FDP Configuration: Valid
00:09:52.612 Vendor Specific Size: 0
00:09:52.612 Number of Reclaim Groups: 2
00:09:52.612 Number of Reclaim Unit Handles: 8
00:09:52.612 Max Placement Identifiers: 128
00:09:52.612 Number of Namespaces Supported: 256
00:09:52.612 Reclaim Unit Nominal Size: 6000000 bytes
00:09:52.612 Estimated Reclaim Unit Time Limit: Not Reported
00:09:52.612 RUH Desc #000: RUH Type: Initially Isolated
00:09:52.612 RUH Desc #001: RUH Type: Initially Isolated
00:09:52.612 RUH Desc #002: RUH Type: Initially Isolated
00:09:52.612 RUH Desc #003: RUH Type: Initially Isolated
00:09:52.612 RUH Desc #004: RUH Type: Initially Isolated
00:09:52.612 RUH Desc #005: RUH Type: Initially Isolated
00:09:52.612 RUH Desc #006: RUH Type: Initially Isolated
00:09:52.612 RUH Desc #007: RUH Type: Initially Isolated
00:09:52.612
00:09:52.612 FDP reclaim unit handle usage log page
00:09:52.612 ======================================
00:09:52.612 Number of Reclaim Unit Handles: 8
00:09:52.612 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:52.612 RUH Usage Desc #001: RUH Attributes: Unused
00:09:52.612 RUH Usage Desc #002: RUH Attributes: Unused
00:09:52.612 RUH Usage Desc #003: RUH Attributes: Unused
00:09:52.612 RUH Usage Desc #004: RUH Attributes: Unused
00:09:52.612 RUH Usage Desc #005: RUH Attributes: Unused
00:09:52.612 RUH Usage Desc #006: RUH Attributes: Unused
00:09:52.612 RUH Usage Desc #007: RUH Attributes: Unused
00:09:52.612
00:09:52.612 FDP statistics log page
00:09:52.612 =======================
00:09:52.612 Host bytes with metadata written: 990400512
00:09:52.612 Media bytes with metadata written: 990515200
00:09:52.612 Media bytes erased: 0
00:09:52.612
00:09:52.612 FDP Reclaim unit handle status
00:09:52.612 ==============================
00:09:52.612 Number of RUHS descriptors: 2
00:09:52.612 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000f7b
00:09:52.612 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:52.612
00:09:52.612 FDP write on placement id: 0 success
00:09:52.612
00:09:52.612 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:52.612
00:09:52.612 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:52.612
00:09:52.612 Get Feature: FDP Events for Placement handle: #0
00:09:52.612 ========================
00:09:52.612 Number of FDP Events: 6
00:09:52.612 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:52.612 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:52.612 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:09:52.612 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:52.612 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:52.612 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:52.612
00:09:52.612 FDP events log page
00:09:52.612 ===================
00:09:52.612 Number of FDP events: 1
00:09:52.612 FDP Event #0:
00:09:52.612 Event Type: RU Not Written to Capacity
00:09:52.612 Placement Identifier: Valid
00:09:52.612 NSID: Valid
00:09:52.612 Location: Valid
00:09:52.612 Placement Identifier: 0
00:09:52.612 Event Timestamp: 5
00:09:52.612 Namespace Identifier: 1
00:09:52.612 Reclaim Group Identifier: 0
00:09:52.612 Reclaim Unit Handle Identifier: 0
00:09:52.612
00:09:52.612 FDP test passed
************************************
00:09:52.612 END TEST nvme_flexible_data_placement
00:09:52.612 ************************************
00:09:52.612
00:09:52.612 real 0m0.213s
00:09:52.612 user 0m0.052s
00:09:52.612 sys 0m0.059s
00:09:52.612 21:41:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:52.612 21:41:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:09:52.612 ************************************
00:09:52.612 END TEST nvme_fdp
00:09:52.612 ************************************
00:09:52.612
00:09:52.612 real 0m7.396s
00:09:52.612 user 0m0.940s
00:09:52.612 sys 0m1.367s
00:09:52.612 21:41:11 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:52.612 21:41:11 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:52.612 21:41:11 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:09:52.612 21:41:11 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:52.612 21:41:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:52.612 21:41:11 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:52.612 21:41:11 -- common/autotest_common.sh@10 -- # set +x
00:09:52.612 ************************************
00:09:52.612 START TEST nvme_rpc
00:09:52.612 ************************************
00:09:52.612 21:41:11 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:52.612 * Looking for test storage...
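Before the FDP test above ran, get_ctrls_with_feature settled on nvme3 by testing bit 19 of each controller's ctratt value: 0x8000 (nvme0, nvme1, nvme2) fails the mask, while 0x88010 (nvme3) passes, since 0x88010 & (1 << 19) = 0x80000. A standalone sketch of that probe (assuming nvme-cli is available; the bitmask mirrors ctrl_has_fdp in nvme/functions.sh, the device glob is illustrative):

  #!/usr/bin/env bash
  # Print the first controller whose CTRATT advertises FDP support (bit 19).
  for dev in /dev/nvme[0-9]; do
      ctratt=$(nvme id-ctrl "$dev" 2>/dev/null | awk '$1 == "ctratt" {print $3}')
      [[ -n $ctratt ]] || continue
      if (( ctratt & (1 << 19) )); then
          echo "$dev: ctratt=$ctratt (FDP supported)"
          break
      fi
  done

Bash arithmetic accepts the 0x prefix directly, which is why the trace can compare the raw register string against the shifted bit without any conversion step.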
00:09:52.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:52.612 21:41:11 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:52.612 21:41:11 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:52.612 21:41:11 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:52.612 21:41:11 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:52.612 21:41:11 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.612 21:41:11 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.612 21:41:11 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.612 21:41:11 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.612 21:41:11 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.612 21:41:11 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.613 21:41:11 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.613 21:41:11 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.613 21:41:11 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.613 21:41:11 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.613 21:41:11 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.613 21:41:11 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:52.613 21:41:11 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:52.613 21:41:11 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.613 21:41:11 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.613 21:41:11 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.915 21:41:11 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:52.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.915 --rc genhtml_branch_coverage=1 00:09:52.915 --rc genhtml_function_coverage=1 00:09:52.915 --rc genhtml_legend=1 00:09:52.915 --rc geninfo_all_blocks=1 00:09:52.915 --rc geninfo_unexecuted_blocks=1 00:09:52.915 00:09:52.915 ' 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:52.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.915 --rc genhtml_branch_coverage=1 00:09:52.915 --rc genhtml_function_coverage=1 00:09:52.915 --rc genhtml_legend=1 00:09:52.915 --rc geninfo_all_blocks=1 00:09:52.915 --rc geninfo_unexecuted_blocks=1 00:09:52.915 00:09:52.915 ' 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:52.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.915 --rc genhtml_branch_coverage=1 00:09:52.915 --rc genhtml_function_coverage=1 00:09:52.915 --rc genhtml_legend=1 00:09:52.915 --rc geninfo_all_blocks=1 00:09:52.915 --rc geninfo_unexecuted_blocks=1 00:09:52.915 00:09:52.915 ' 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:52.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.915 --rc genhtml_branch_coverage=1 00:09:52.915 --rc genhtml_function_coverage=1 00:09:52.915 --rc genhtml_legend=1 00:09:52.915 --rc geninfo_all_blocks=1 00:09:52.915 --rc geninfo_unexecuted_blocks=1 00:09:52.915 00:09:52.915 ' 00:09:52.915 21:41:11 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.915 21:41:11 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:52.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.915 21:41:11 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:52.915 21:41:11 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66125 00:09:52.915 21:41:11 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:52.915 21:41:11 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66125 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 66125 ']' 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.915 21:41:11 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.915 21:41:11 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:52.915 [2024-09-29 21:41:11.736203] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
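The bdf the test settles on here comes from gen_nvme.sh piped through jq, exactly as the get_first_nvme_bdf trace shows. A condensed sketch of that lookup (paths as in the log; the error message is added for illustration):

  #!/usr/bin/env bash
  # Resolve NVMe PCI addresses the way get_first_nvme_bdf does in the trace above.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  echo "first bdf: ${bdfs[0]}"   # 0000:00:10.0 in this run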
00:09:52.915 [2024-09-29 21:41:11.736328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66125 ] 00:09:53.181 [2024-09-29 21:41:11.886807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:53.181 [2024-09-29 21:41:12.076643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.181 [2024-09-29 21:41:12.076835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.759 21:41:12 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.759 21:41:12 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:53.759 21:41:12 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:54.017 Nvme0n1 00:09:54.017 21:41:12 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:54.017 21:41:12 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:54.275 request: 00:09:54.275 { 00:09:54.275 "bdev_name": "Nvme0n1", 00:09:54.275 "filename": "non_existing_file", 00:09:54.275 "method": "bdev_nvme_apply_firmware", 00:09:54.275 "req_id": 1 00:09:54.275 } 00:09:54.275 Got JSON-RPC error response 00:09:54.275 response: 00:09:54.275 { 00:09:54.275 "code": -32603, 00:09:54.275 "message": "open file failed." 00:09:54.275 } 00:09:54.275 21:41:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:54.275 21:41:13 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:54.275 21:41:13 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:54.275 21:41:13 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:54.275 21:41:13 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66125 00:09:54.275 21:41:13 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 66125 ']' 00:09:54.275 21:41:13 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 66125 00:09:54.275 21:41:13 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:54.275 21:41:13 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.275 21:41:13 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66125 00:09:54.533 killing process with pid 66125 00:09:54.533 21:41:13 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.533 21:41:13 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.533 21:41:13 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66125' 00:09:54.533 21:41:13 nvme_rpc -- common/autotest_common.sh@969 -- # kill 66125 00:09:54.533 21:41:13 nvme_rpc -- common/autotest_common.sh@974 -- # wait 66125 00:09:55.907 ************************************ 00:09:55.907 END TEST nvme_rpc 00:09:55.907 ************************************ 00:09:55.907 00:09:55.907 real 0m3.012s 00:09:55.907 user 0m5.532s 00:09:55.907 sys 0m0.477s 00:09:55.907 21:41:14 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.907 21:41:14 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.907 21:41:14 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:55.907 21:41:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:55.907 21:41:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.907 21:41:14 -- common/autotest_common.sh@10 -- # set +x 00:09:55.907 ************************************ 00:09:55.907 START TEST nvme_rpc_timeouts 00:09:55.907 ************************************ 00:09:55.907 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:55.907 * Looking for test storage... 00:09:55.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:55.907 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:55.907 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:55.907 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:55.907 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.907 21:41:14 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:55.907 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.907 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:55.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.907 --rc genhtml_branch_coverage=1 00:09:55.907 --rc genhtml_function_coverage=1 00:09:55.907 --rc genhtml_legend=1 00:09:55.907 --rc geninfo_all_blocks=1 00:09:55.907 --rc geninfo_unexecuted_blocks=1 00:09:55.907 00:09:55.907 ' 00:09:55.907 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:55.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.907 --rc genhtml_branch_coverage=1 00:09:55.907 --rc genhtml_function_coverage=1 00:09:55.907 --rc genhtml_legend=1 00:09:55.907 --rc geninfo_all_blocks=1 00:09:55.907 --rc geninfo_unexecuted_blocks=1 00:09:55.907 00:09:55.907 ' 00:09:55.907 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:55.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.907 --rc genhtml_branch_coverage=1 00:09:55.907 --rc genhtml_function_coverage=1 00:09:55.907 --rc genhtml_legend=1 00:09:55.907 --rc geninfo_all_blocks=1 00:09:55.907 --rc geninfo_unexecuted_blocks=1 00:09:55.907 00:09:55.907 ' 00:09:55.907 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:55.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.907 --rc genhtml_branch_coverage=1 00:09:55.907 --rc genhtml_function_coverage=1 00:09:55.907 --rc genhtml_legend=1 00:09:55.907 --rc geninfo_all_blocks=1 00:09:55.907 --rc geninfo_unexecuted_blocks=1 00:09:55.907 00:09:55.907 ' 00:09:55.907 21:41:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.907 21:41:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66184 00:09:55.907 21:41:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66184 00:09:55.907 21:41:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66222 00:09:55.907 21:41:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:55.907 21:41:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:55.908 21:41:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66222 00:09:55.908 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 66222 ']' 00:09:55.908 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.908 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:55.908 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.908 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:55.908 21:41:14 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:55.908 [2024-09-29 21:41:14.729301] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:55.908 [2024-09-29 21:41:14.729440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66222 ] 00:09:55.908 [2024-09-29 21:41:14.877317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.165 [2024-09-29 21:41:15.026835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.165 [2024-09-29 21:41:15.026933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.729 Checking default timeout settings: 00:09:56.729 21:41:15 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.729 21:41:15 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:56.729 21:41:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:56.729 21:41:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:56.989 Making settings changes with rpc: 00:09:56.989 21:41:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:56.989 21:41:15 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:57.249 Check default vs. modified settings: 00:09:57.249 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:57.250 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66184 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66184 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:57.510 Setting action_on_timeout is changed as expected. 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66184 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66184 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:57.510 Setting timeout_us is changed as expected. 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66184 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66184 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:57.510 Setting timeout_admin_us is changed as expected. 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:57.510 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:57.511 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:57.511 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66184 /tmp/settings_modified_66184 00:09:57.511 21:41:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66222 00:09:57.511 21:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 66222 ']' 00:09:57.511 21:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 66222 00:09:57.511 21:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:57.511 21:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:57.511 21:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66222 00:09:57.511 killing process with pid 66222 00:09:57.511 21:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:57.511 21:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:57.511 21:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66222' 00:09:57.511 21:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 66222 00:09:57.511 21:41:16 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 66222 00:09:58.893 RPC TIMEOUT SETTING TEST PASSED. 00:09:58.893 21:41:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
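The "changed as expected" messages above come from a before/after comparison: the test saves the target's configuration, applies bdev_nvme_set_options, saves again, and extracts each setting with the same grep/awk/sed chain for both snapshots. A condensed sketch of that flow, using the rpc.py flags and extraction pipeline seen in the trace (temp-file names are illustrative):

  #!/usr/bin/env bash
  # Apply NVMe timeout settings over JSON-RPC and verify they landed,
  # following the pattern nvme_rpc_timeouts.sh traces above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" save_config > /tmp/settings_before
  "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
          --timeout-admin-us=24000000 --action-on-timeout=abort
  "$rpc" save_config > /tmp/settings_after
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_before | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_after | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
  done

The sed step strips JSON punctuation so that, for example, `"timeout_us": 12000000,` compares as the bare value 12000000.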
00:09:58.893 00:09:58.893 real 0m3.172s 00:09:58.893 user 0m6.032s 00:09:58.893 sys 0m0.484s 00:09:58.893 ************************************ 00:09:58.893 END TEST nvme_rpc_timeouts 00:09:58.893 ************************************ 00:09:58.893 21:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.893 21:41:17 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:58.893 21:41:17 -- spdk/autotest.sh@239 -- # uname -s 00:09:58.893 21:41:17 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:58.893 21:41:17 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:58.893 21:41:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:58.893 21:41:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.893 21:41:17 -- common/autotest_common.sh@10 -- # set +x 00:09:58.893 ************************************ 00:09:58.893 START TEST sw_hotplug 00:09:58.893 ************************************ 00:09:58.893 21:41:17 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:58.893 * Looking for test storage... 00:09:58.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:58.893 21:41:17 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:58.893 21:41:17 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:58.893 21:41:17 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:58.893 21:41:17 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.893 21:41:17 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:58.893 21:41:17 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.893 21:41:17 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:58.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.893 --rc genhtml_branch_coverage=1 00:09:58.893 --rc genhtml_function_coverage=1 00:09:58.893 --rc genhtml_legend=1 00:09:58.893 --rc geninfo_all_blocks=1 00:09:58.893 --rc geninfo_unexecuted_blocks=1 00:09:58.893 00:09:58.893 ' 00:09:58.893 21:41:17 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:58.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.893 --rc genhtml_branch_coverage=1 00:09:58.893 --rc genhtml_function_coverage=1 00:09:58.893 --rc genhtml_legend=1 00:09:58.893 --rc geninfo_all_blocks=1 00:09:58.893 --rc geninfo_unexecuted_blocks=1 00:09:58.893 00:09:58.893 ' 00:09:58.893 21:41:17 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:58.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.893 --rc genhtml_branch_coverage=1 00:09:58.893 --rc genhtml_function_coverage=1 00:09:58.893 --rc genhtml_legend=1 00:09:58.893 --rc geninfo_all_blocks=1 00:09:58.893 --rc geninfo_unexecuted_blocks=1 00:09:58.893 00:09:58.893 ' 00:09:58.893 21:41:17 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:58.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.893 --rc genhtml_branch_coverage=1 00:09:58.893 --rc genhtml_function_coverage=1 00:09:58.893 --rc genhtml_legend=1 00:09:58.893 --rc geninfo_all_blocks=1 00:09:58.893 --rc geninfo_unexecuted_blocks=1 00:09:58.893 00:09:58.893 ' 00:09:58.894 21:41:17 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:59.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.461 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:59.462 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:59.462 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:59.462 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:59.462 21:41:18 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:59.462 21:41:18 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:59.462 21:41:18 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:09:59.462 21:41:18 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:59.462 21:41:18 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:59.462 21:41:18 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:59.462 21:41:18 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:59.462 21:41:18 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:59.462 21:41:18 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:59.720 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.978 Waiting for block devices as requested 00:09:59.978 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.978 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.978 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.236 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:05.519 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:05.519 21:41:24 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:05.519 21:41:24 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:05.519 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:05.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.519 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:05.780 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:06.042 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:06.042 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:06.042 21:41:24 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:06.042 21:41:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:06.042 21:41:24 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67073 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:06.042 21:41:25 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:06.042 21:41:25 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:06.042 21:41:25 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:06.042 21:41:25 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:06.042 21:41:25 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:06.042 21:41:25 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:06.302 Initializing NVMe Controllers 00:10:06.303 Attaching to 0000:00:10.0 00:10:06.303 Attaching to 0000:00:11.0 00:10:06.303 Attached to 0000:00:10.0 00:10:06.303 Attached to 0000:00:11.0 00:10:06.303 Initialization complete. Starting I/O... 
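The nvmes array driving the hotplug run below comes from nvme_in_userspace, whose expansion is traced a little further up: walk every PCI function with class 01 / subclass 08 / progif 02 (NVMe), filter it through pci_can_use (the PCI_ALLOWED / PCI_BLOCKED gate), and drop anything still owned by the kernel nvme driver. sw_hotplug.sh then trims the result to the first nvme_count=2 entries, which is why only 0000:00:10.0 and 0000:00:11.0 take part in the plug/unplug cycles. A condensed sketch, with the scripts/common.sh helpers folded together for brevity (so the shape is an approximation, not the verbatim source):

  pci_can_use() {                               # simplified PCI_ALLOWED/PCI_BLOCKED gate
      [[ $PCI_BLOCKED == *"$1"* ]] && return 1
      [[ -z $PCI_ALLOWED || $PCI_ALLOWED == *"$1"* ]]
  }
  nvme_in_userspace() {
      local bdf bdfs=()
      # lspci -mm -n -D prints '<BDF> "<class>" ...'; keep class 0108 with progif 02.
      for bdf in $(lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
          pci_can_use "$bdf" || continue
          [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue   # still bound to the kernel nvme driver
          bdfs+=("$bdf")
      done
      printf '%s\n' "${bdfs[@]}"
  }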
00:10:06.303 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:06.303 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:06.303 00:10:07.246 QEMU NVMe Ctrl (12340 ): 2915 I/Os completed (+2915) 00:10:07.246 QEMU NVMe Ctrl (12341 ): 2888 I/Os completed (+2888) 00:10:07.246 00:10:08.631 QEMU NVMe Ctrl (12340 ): 6554 I/Os completed (+3639) 00:10:08.631 QEMU NVMe Ctrl (12341 ): 6517 I/Os completed (+3629) 00:10:08.631 00:10:09.226 QEMU NVMe Ctrl (12340 ): 10270 I/Os completed (+3716) 00:10:09.226 QEMU NVMe Ctrl (12341 ): 10227 I/Os completed (+3710) 00:10:09.226 00:10:10.610 QEMU NVMe Ctrl (12340 ): 13822 I/Os completed (+3552) 00:10:10.610 QEMU NVMe Ctrl (12341 ): 13770 I/Os completed (+3543) 00:10:10.610 00:10:11.554 QEMU NVMe Ctrl (12340 ): 17339 I/Os completed (+3517) 00:10:11.554 QEMU NVMe Ctrl (12341 ): 17315 I/Os completed (+3545) 00:10:11.554 00:10:12.126 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.126 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.126 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.126 [2024-09-29 21:41:31.011900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:12.126 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:12.126 [2024-09-29 21:41:31.013106] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 [2024-09-29 21:41:31.013158] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 [2024-09-29 21:41:31.013176] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 [2024-09-29 21:41:31.013195] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:12.126 [2024-09-29 21:41:31.015038] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 [2024-09-29 21:41:31.015083] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 [2024-09-29 21:41:31.015098] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 [2024-09-29 21:41:31.015112] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.126 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.126 [2024-09-29 21:41:31.032111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:12.126 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:12.126 [2024-09-29 21:41:31.033166] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 [2024-09-29 21:41:31.033205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 [2024-09-29 21:41:31.033226] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 [2024-09-29 21:41:31.033244] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.126 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:12.126 [2024-09-29 21:41:31.034910] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.127 [2024-09-29 21:41:31.034944] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.127 [2024-09-29 21:41:31.034959] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.127 [2024-09-29 21:41:31.034972] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.127 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:12.127 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:12.127 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:12.127 EAL: Scan for (pci) bus failed. 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:12.389 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:12.389 Attaching to 0000:00:10.0 00:10:12.389 Attached to 0000:00:10.0 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.389 21:41:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:12.389 Attaching to 0000:00:11.0 00:10:12.389 Attached to 0000:00:11.0 00:10:13.333 QEMU NVMe Ctrl (12340 ): 2979 I/Os completed (+2979) 00:10:13.333 QEMU NVMe Ctrl (12341 ): 2731 I/Os completed (+2731) 00:10:13.333 00:10:14.277 QEMU NVMe Ctrl (12340 ): 6099 I/Os completed (+3120) 00:10:14.277 QEMU NVMe Ctrl (12341 ): 5730 I/Os completed (+2999) 00:10:14.277 00:10:15.214 QEMU NVMe Ctrl (12340 ): 9489 I/Os completed (+3390) 00:10:15.214 QEMU NVMe Ctrl (12341 ): 9031 I/Os completed (+3301) 00:10:15.214 00:10:16.589 QEMU NVMe Ctrl (12340 ): 12743 I/Os completed (+3254) 00:10:16.589 QEMU NVMe Ctrl (12341 ): 12243 I/Os completed (+3212) 00:10:16.589 00:10:17.523 QEMU NVMe Ctrl (12340 ): 16451 I/Os completed (+3708) 00:10:17.523 QEMU NVMe Ctrl (12341 ): 16006 I/Os completed (+3763) 00:10:17.523 00:10:18.456 QEMU NVMe Ctrl (12340 ): 20128 I/Os completed (+3677) 00:10:18.456 QEMU NVMe Ctrl (12341 ): 19709 I/Os completed (+3703) 00:10:18.456 00:10:19.397 QEMU NVMe Ctrl (12340 ): 23793 I/Os completed (+3665) 00:10:19.397 QEMU NVMe Ctrl (12341 ): 23359 I/Os completed (+3650) 
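Each hotplug event in the run above follows the same pattern: the bare 'echo 1' at sw_hotplug.sh@40 yanks a controller out from under the example app, which logs the [in failed state] error for that controller and aborts its outstanding trackers before I/O resumes after the re-attach. xtrace does not show redirections, so the write target below is inferred from the standard Linux interface for a surprise remove, not taken from the source:

  # Assumed expansion of sw_hotplug.sh@39-40; the redirection target is an inference.
  for dev in "${nvmes[@]}"; do
      echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise-remove the PCI function
  done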
00:10:19.397 00:10:20.335 QEMU NVMe Ctrl (12340 ): 27737 I/Os completed (+3944) 00:10:20.335 QEMU NVMe Ctrl (12341 ): 27303 I/Os completed (+3944) 00:10:20.335 00:10:21.274 QEMU NVMe Ctrl (12340 ): 30995 I/Os completed (+3258) 00:10:21.274 QEMU NVMe Ctrl (12341 ): 30460 I/Os completed (+3157) 00:10:21.274 00:10:22.649 QEMU NVMe Ctrl (12340 ): 34535 I/Os completed (+3540) 00:10:22.649 QEMU NVMe Ctrl (12341 ): 34031 I/Os completed (+3571) 00:10:22.649 00:10:23.219 QEMU NVMe Ctrl (12340 ): 38075 I/Os completed (+3540) 00:10:23.219 QEMU NVMe Ctrl (12341 ): 37305 I/Os completed (+3274) 00:10:23.219 00:10:24.603 QEMU NVMe Ctrl (12340 ): 41025 I/Os completed (+2950) 00:10:24.603 QEMU NVMe Ctrl (12341 ): 40304 I/Os completed (+2999) 00:10:24.603 00:10:24.603 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:24.603 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.603 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.603 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.603 [2024-09-29 21:41:43.321458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:24.603 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:24.603 [2024-09-29 21:41:43.322981] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.603 [2024-09-29 21:41:43.323144] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.603 [2024-09-29 21:41:43.323184] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.603 [2024-09-29 21:41:43.323356] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.603 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.603 [2024-09-29 21:41:43.325429] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.603 [2024-09-29 21:41:43.325609] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.603 [2024-09-29 21:41:43.325649] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.603 [2024-09-29 21:41:43.325713] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.603 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.603 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.604 [2024-09-29 21:41:43.345480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:24.604 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:24.604 [2024-09-29 21:41:43.346773] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.604 [2024-09-29 21:41:43.346905] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.604 [2024-09-29 21:41:43.346982] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.604 [2024-09-29 21:41:43.347019] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.604 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.604 [2024-09-29 21:41:43.351199] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.604 [2024-09-29 21:41:43.351322] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.604 [2024-09-29 21:41:43.351421] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.604 [2024-09-29 21:41:43.351457] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.604 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:24.604 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:24.604 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.604 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.604 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.604 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.604 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.604 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.604 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.604 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:24.604 Attaching to 0000:00:10.0 00:10:24.604 Attached to 0000:00:10.0 00:10:24.863 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.863 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.863 21:41:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.863 Attaching to 0000:00:11.0 00:10:24.863 Attached to 0000:00:11.0 00:10:25.434 QEMU NVMe Ctrl (12340 ): 2264 I/Os completed (+2264) 00:10:25.434 QEMU NVMe Ctrl (12341 ): 1874 I/Os completed (+1874) 00:10:25.434 00:10:26.374 QEMU NVMe Ctrl (12340 ): 5471 I/Os completed (+3207) 00:10:26.374 QEMU NVMe Ctrl (12341 ): 5040 I/Os completed (+3166) 00:10:26.374 00:10:27.316 QEMU NVMe Ctrl (12340 ): 8808 I/Os completed (+3337) 00:10:27.316 QEMU NVMe Ctrl (12341 ): 8205 I/Os completed (+3165) 00:10:27.316 00:10:28.292 QEMU NVMe Ctrl (12340 ): 12111 I/Os completed (+3303) 00:10:28.292 QEMU NVMe Ctrl (12341 ): 11549 I/Os completed (+3344) 00:10:28.292 00:10:29.234 QEMU NVMe Ctrl (12340 ): 15429 I/Os completed (+3318) 00:10:29.234 QEMU NVMe Ctrl (12341 ): 14637 I/Os completed (+3088) 00:10:29.234 00:10:30.625 QEMU NVMe Ctrl (12340 ): 18621 I/Os completed (+3192) 00:10:30.625 QEMU NVMe Ctrl (12341 ): 17617 I/Os completed (+2980) 00:10:30.625 00:10:31.561 QEMU NVMe Ctrl (12340 ): 22352 I/Os completed (+3731) 00:10:31.561 QEMU NVMe Ctrl (12341 ): 21028 I/Os completed (+3411) 00:10:31.561 00:10:32.496 QEMU NVMe Ctrl (12340 ): 25821 I/Os completed (+3469) 00:10:32.496 QEMU NVMe Ctrl (12341 ): 24625 I/Os completed (+3597) 00:10:32.496 
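On the attach half of each cycle (sw_hotplug.sh@56-62, traced above) the script brings the functions back and pins them to uio_pci_generic before the example app re-attaches and the I/O counters restart. As with the remove, xtrace strips the redirections, so the targets below are the stock sysfs rescan/rebind sequence assumed to sit behind those echoes; the trace also writes the BDF twice (@60 and @61), likely a probe plus an explicit bind, of which only one is sketched:

  echo 1 > /sys/bus/pci/rescan                  # @56: re-enumerate removed functions (path assumed)
  for dev in "${nvmes[@]}"; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59 (path assumed)
      echo "$dev" > /sys/bus/pci/drivers_probe                             # @60 (path assumed)
      echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62: clear the override
  done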
00:10:33.437 QEMU NVMe Ctrl (12340 ): 29010 I/Os completed (+3189) 00:10:33.437 QEMU NVMe Ctrl (12341 ): 27834 I/Os completed (+3209) 00:10:33.437 00:10:34.376 QEMU NVMe Ctrl (12340 ): 32584 I/Os completed (+3574) 00:10:34.376 QEMU NVMe Ctrl (12341 ): 31219 I/Os completed (+3385) 00:10:34.376 00:10:35.310 QEMU NVMe Ctrl (12340 ): 36141 I/Os completed (+3557) 00:10:35.310 QEMU NVMe Ctrl (12341 ): 34873 I/Os completed (+3654) 00:10:35.310 00:10:36.246 QEMU NVMe Ctrl (12340 ): 39692 I/Os completed (+3551) 00:10:36.246 QEMU NVMe Ctrl (12341 ): 38411 I/Os completed (+3538) 00:10:36.246 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.814 [2024-09-29 21:41:55.623089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:36.814 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:36.814 [2024-09-29 21:41:55.624538] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.624595] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.624615] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.624633] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:36.814 [2024-09-29 21:41:55.626623] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.626674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.626691] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.626707] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.814 [2024-09-29 21:41:55.647526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:36.814 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:36.814 [2024-09-29 21:41:55.648687] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.648865] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.648932] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.648967] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:36.814 [2024-09-29 21:41:55.650844] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.650884] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.650903] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 [2024-09-29 21:41:55.650917] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:36.814 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:37.075 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.075 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:37.075 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:37.075 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:37.075 Attaching to 0000:00:10.0 00:10:37.075 Attached to 0000:00:10.0 00:10:37.075 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:37.075 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:37.075 21:41:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:37.075 Attaching to 0000:00:11.0 00:10:37.075 Attached to 0000:00:11.0 00:10:37.075 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:37.075 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:37.075 [2024-09-29 21:41:55.888802] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:49.303 21:42:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:49.303 21:42:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:49.303 21:42:07 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.87 00:10:49.303 21:42:07 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.87 00:10:49.303 21:42:07 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:49.303 21:42:07 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.87 00:10:49.303 21:42:07 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.87 2 00:10:49.303 remove_attach_helper took 42.87s to complete (handling 2 nvme drive(s)) 21:42:07 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:55.910 21:42:13 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67073 00:10:55.910 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67073) - No such process 00:10:55.910 21:42:13 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67073 00:10:55.910 21:42:13 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:55.910 21:42:13 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:55.910 21:42:13 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:55.910 21:42:13 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67623 00:10:55.910 21:42:13 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:55.910 21:42:13 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:55.910 21:42:13 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67623 00:10:55.910 21:42:13 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67623 ']' 00:10:55.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.910 21:42:13 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.910 21:42:13 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.910 21:42:13 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.910 21:42:13 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.910 21:42:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.910 [2024-09-29 21:42:13.972571] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:55.910 [2024-09-29 21:42:13.972851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67623 ] 00:10:55.910 [2024-09-29 21:42:14.123985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.910 [2024-09-29 21:42:14.321896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.171 21:42:14 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.171 21:42:14 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:56.171 21:42:14 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:56.171 21:42:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.171 21:42:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:56.171 21:42:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.171 21:42:14 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:56.171 21:42:14 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:56.171 21:42:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:56.171 21:42:14 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:56.171 21:42:14 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:56.171 21:42:14 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:56.171 21:42:14 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:56.171 21:42:14 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:56.171 21:42:14 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:56.171 21:42:14 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:56.171 21:42:14 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:56.171 21:42:14 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:56.171 21:42:14 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.732 21:42:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.732 21:42:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.732 21:42:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.732 21:42:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.732 21:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:02.732 21:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:02.732 [2024-09-29 21:42:21.051775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:02.732 [2024-09-29 21:42:21.053054] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.732 [2024-09-29 21:42:21.053217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.732 [2024-09-29 21:42:21.053236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.732 [2024-09-29 21:42:21.053256] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.732 [2024-09-29 21:42:21.053265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.732 [2024-09-29 21:42:21.053274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.732 [2024-09-29 21:42:21.053281] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.732 [2024-09-29 21:42:21.053290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.732 [2024-09-29 21:42:21.053297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.732 [2024-09-29 21:42:21.053310] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.732 [2024-09-29 21:42:21.053316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.732 [2024-09-29 21:42:21.053324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.732 21:42:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:02.732 21:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.732 21:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.732 21:42:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.732 21:42:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.732 21:42:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.732 21:42:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.732 21:42:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.732 [2024-09-29 21:42:21.551764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:02.732 [2024-09-29 21:42:21.553133] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.732 [2024-09-29 21:42:21.553161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.732 [2024-09-29 21:42:21.553172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.732 [2024-09-29 21:42:21.553184] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.732 [2024-09-29 21:42:21.553194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.732 [2024-09-29 21:42:21.553201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.732 [2024-09-29 21:42:21.553210] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.732 [2024-09-29 21:42:21.553217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.732 [2024-09-29 21:42:21.553226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.732 [2024-09-29 21:42:21.553233] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.733 [2024-09-29 21:42:21.553241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.733 [2024-09-29 21:42:21.553247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.733 21:42:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.733 21:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:02.733 21:42:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.298 21:42:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.298 21:42:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.298 21:42:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.298 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:03.556 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:03.556 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.556 21:42:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.794 21:42:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.794 21:42:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.794 21:42:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.794 21:42:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.794 21:42:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.794 21:42:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.794 [2024-09-29 21:42:34.451976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:11:15.794 [2024-09-29 21:42:34.453300] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.794 [2024-09-29 21:42:34.453336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.794 [2024-09-29 21:42:34.453348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.794 [2024-09-29 21:42:34.453367] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.794 [2024-09-29 21:42:34.453375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.794 [2024-09-29 21:42:34.453383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.794 [2024-09-29 21:42:34.453402] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.794 [2024-09-29 21:42:34.453410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.794 [2024-09-29 21:42:34.453417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.794 [2024-09-29 21:42:34.453426] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.794 [2024-09-29 21:42:34.453433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.794 [2024-09-29 21:42:34.453441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:15.794 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:16.052 [2024-09-29 21:42:34.951968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
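This target-based pass (use_bdev=true) cannot simply fire and forget the removals: after each one the script polls the spdk_tgt process over RPC until the bdevs backed by the unplugged controllers disappear, which is what the "Still waiting for ... to be gone" lines above are. The helper traced at sw_hotplug.sh@12-13 reduces bdev_get_bdevs to the sorted set of backing PCI addresses, and the @50-51 loop spins on it. Loosely reconstructed (the jq path and printf format are copied from the trace; the exact loop shape and the rpc_cmd stand-in are assumptions):

  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # stand-in for the autotest wrapper
  bdev_bdfs() {
      rpc_cmd bdev_get_bdevs \
          | jq -r '.[].driver_specific.nvme[].pci_address' \
          | sort -u
  }
  bdfs=($(bdev_bdfs))
  while ((${#bdfs[@]} > 0)); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
      bdfs=($(bdev_bdfs))
  done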
00:11:16.052 [2024-09-29 21:42:34.953155] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.052 [2024-09-29 21:42:34.953187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.052 [2024-09-29 21:42:34.953201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.052 [2024-09-29 21:42:34.953216] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.053 [2024-09-29 21:42:34.953225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.053 [2024-09-29 21:42:34.953233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.053 [2024-09-29 21:42:34.953243] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.053 [2024-09-29 21:42:34.953250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.053 [2024-09-29 21:42:34.953259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.053 [2024-09-29 21:42:34.953266] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.053 [2024-09-29 21:42:34.953274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.053 [2024-09-29 21:42:34.953280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.053 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:16.053 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.053 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.053 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.053 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.053 21:42:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.053 21:42:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.053 21:42:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.053 21:42:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.053 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:16.053 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:16.311 21:42:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.511 21:42:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.511 21:42:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.511 21:42:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.511 21:42:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.511 21:42:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.511 21:42:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:28.511 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:28.511 [2024-09-29 21:42:47.352181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:28.511 [2024-09-29 21:42:47.353455] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.511 [2024-09-29 21:42:47.353489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.511 [2024-09-29 21:42:47.353502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.511 [2024-09-29 21:42:47.353524] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.511 [2024-09-29 21:42:47.353532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.511 [2024-09-29 21:42:47.353543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.511 [2024-09-29 21:42:47.353551] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.512 [2024-09-29 21:42:47.353559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.512 [2024-09-29 21:42:47.353566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.512 [2024-09-29 21:42:47.353576] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.512 [2024-09-29 21:42:47.353582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.512 [2024-09-29 21:42:47.353597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.770 [2024-09-29 21:42:47.752173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:28.770 [2024-09-29 21:42:47.753346] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.770 [2024-09-29 21:42:47.753375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.770 [2024-09-29 21:42:47.753398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.770 [2024-09-29 21:42:47.753410] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.770 [2024-09-29 21:42:47.753420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.770 [2024-09-29 21:42:47.753427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.770 [2024-09-29 21:42:47.753438] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:28.770 [2024-09-29 21:42:47.753445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.028 [2024-09-29 21:42:47.753455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.028 [2024-09-29 21:42:47.753464] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:29.028 [2024-09-29 21:42:47.753472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:29.028 [2024-09-29 21:42:47.753479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.028 21:42:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.028 21:42:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.028 21:42:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.028 21:42:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:29.285 21:42:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:29.285 21:42:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.285 21:42:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:29.285 21:42:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:29.285 21:42:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:29.285 21:42:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:29.285 21:42:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:29.285 21:42:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.489 21:43:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.489 21:43:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.489 21:43:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:41.489 21:43:00 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.26 00:11:41.489 21:43:00 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.26 00:11:41.489 21:43:00 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.26 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.26 2 00:11:41.489 remove_attach_helper took 45.26s to complete (handling 2 nvme drive(s)) 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:41.489 21:43:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.489 21:43:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.489 21:43:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.489 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:41.490 21:43:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.490 21:43:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.490 21:43:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.490 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:41.490 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:41.490 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:41.490 21:43:00 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:41.490 21:43:00 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:41.490 21:43:00 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:41.490 21:43:00 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:41.490 21:43:00 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:41.490 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:41.490 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:41.490 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:41.490 21:43:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:41.490 21:43:00 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.049 21:43:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.049 21:43:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.049 21:43:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:48.049 [2024-09-29 21:43:06.340238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:48.049 [2024-09-29 21:43:06.341565] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.049 [2024-09-29 21:43:06.341601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.049 [2024-09-29 21:43:06.341614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.049 [2024-09-29 21:43:06.341635] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.049 [2024-09-29 21:43:06.341643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.049 [2024-09-29 21:43:06.341652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.049 [2024-09-29 21:43:06.341660] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.049 [2024-09-29 21:43:06.341668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.049 [2024-09-29 21:43:06.341674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.049 [2024-09-29 21:43:06.341684] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.049 [2024-09-29 21:43:06.341692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.049 [2024-09-29 21:43:06.341703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.049 [2024-09-29 21:43:06.740223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
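Bash's xtrace never prints redirections, so the bare `echo 1`, `echo uio_pci_generic`, BDF, and `echo ''` lines at @40 and @56-62 hide their sysfs targets. The sketch below fills those targets in with the standard kernel PCI hotplug interface; this is an assumption based on the echoed values, not a verbatim reconstruction of sw_hotplug.sh (the BDF is echoed twice at @60-61, presumably to both `bind` and `drivers_probe`; the sketch keeps only the latter):

```bash
# Hedged sketch of one surprise-remove/reattach cycle. Only the echoed
# values come from the trace; the sysfs paths are assumed.
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"    # @40: surprise-remove
done
echo 1 > /sys/bus/pci/rescan                       # @56: rediscover devices
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
    echo "$dev" > /sys/bus/pci/drivers_probe                            # @60-61 (assumed target)
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear override
done
```

After the reattach, `sleep 12` (@66) gives the driver time to probe before the BDF list is re-checked.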
00:11:48.049 [2024-09-29 21:43:06.743262] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.049 [2024-09-29 21:43:06.743436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.049 [2024-09-29 21:43:06.743456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.049 [2024-09-29 21:43:06.743474] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.049 [2024-09-29 21:43:06.743483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.049 [2024-09-29 21:43:06.743491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.049 [2024-09-29 21:43:06.743501] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.049 [2024-09-29 21:43:06.743508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.049 [2024-09-29 21:43:06.743517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.049 [2024-09-29 21:43:06.743525] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.049 [2024-09-29 21:43:06.743533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.049 [2024-09-29 21:43:06.743539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.049 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.050 21:43:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.050 21:43:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.050 21:43:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.050 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:48.050 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.050 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.050 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.050 21:43:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:48.308 21:43:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:48.308 21:43:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.308 21:43:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.308 21:43:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.308 21:43:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:48.308 21:43:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:48.308 21:43:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.308 21:43:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.511 21:43:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.511 21:43:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.511 21:43:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.511 21:43:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.511 21:43:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.511 [2024-09-29 21:43:19.240956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
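The heavily escaped pattern at @71 above is just xtrace's rendering of a `[[ ... == ... ]]` comparison (bash escapes the right-hand side so it cannot glob). Stripped of the escapes, the check asserts that the rediscovered, sorted BDF list matches the expected set exactly:

```bash
# Equivalent of the @70-71 assertion, without xtrace's glob-escaping:
bdfs=($(bdev_bdfs))
[[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]
```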
00:12:00.511 [2024-09-29 21:43:19.242052] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.511 [2024-09-29 21:43:19.242086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.511 [2024-09-29 21:43:19.242097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.511 [2024-09-29 21:43:19.242117] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.511 [2024-09-29 21:43:19.242125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.511 [2024-09-29 21:43:19.242133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.511 [2024-09-29 21:43:19.242141] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.511 [2024-09-29 21:43:19.242149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.511 [2024-09-29 21:43:19.242156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.511 [2024-09-29 21:43:19.242166] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.511 [2024-09-29 21:43:19.242173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.511 [2024-09-29 21:43:19.242182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.511 21:43:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:00.511 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.769 [2024-09-29 21:43:19.740951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
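A note on the `helper_time` figures (45.26 s earlier, 44.81 s further down): they come from `timing_cmd` in autotest_common.sh, which runs the helper under bash's `time` builtin with `TIMEFORMAT=%2R` so only wall-clock seconds are emitted. A simplified sketch of the idea; the real helper also juggles file descriptors with `exec` (visible at @709) so the timed command's own output still reaches the log:

```bash
# Simplified: run "$@", echo elapsed wall-clock seconds, preserve exit status.
# Here the command's output is discarded; the real helper keeps it via exec.
timing_cmd() {
    local cmd_es=0 time='' TIMEFORMAT=%2R
    time=$({ time "$@" > /dev/null 2>&1; } 2>&1) || cmd_es=$?
    echo "$time"
    return "$cmd_es"
}
```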
00:12:00.769 [2024-09-29 21:43:19.742153] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.769 [2024-09-29 21:43:19.742185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.769 [2024-09-29 21:43:19.742197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.769 [2024-09-29 21:43:19.742208] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.769 [2024-09-29 21:43:19.742219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.769 [2024-09-29 21:43:19.742226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.769 [2024-09-29 21:43:19.742236] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.769 [2024-09-29 21:43:19.742243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.769 [2024-09-29 21:43:19.742251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.769 [2024-09-29 21:43:19.742259] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.769 [2024-09-29 21:43:19.742267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.770 [2024-09-29 21:43:19.742274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.028 21:43:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.028 21:43:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.028 21:43:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.028 21:43:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:01.028 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:01.287 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.287 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.287 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.287 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:01.287 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:01.287 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.287 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:13.490 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:13.490 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:13.490 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:13.490 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.490 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.490 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.490 21:43:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.490 21:43:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.490 21:43:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.490 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:13.490 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:13.490 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.490 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.491 [2024-09-29 21:43:32.141171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:13.491 [2024-09-29 21:43:32.142335] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.491 [2024-09-29 21:43:32.142382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.491 [2024-09-29 21:43:32.142408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.491 [2024-09-29 21:43:32.142429] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.491 [2024-09-29 21:43:32.142438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.491 [2024-09-29 21:43:32.142447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.491 [2024-09-29 21:43:32.142454] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.491 [2024-09-29 21:43:32.142465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.491 [2024-09-29 21:43:32.142471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.491 [2024-09-29 21:43:32.142482] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.491 [2024-09-29 21:43:32.142489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.491 [2024-09-29 21:43:32.142499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.491 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.491 21:43:32 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.491 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:13.491 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.491 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.491 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.491 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.491 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.491 21:43:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.491 21:43:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.491 21:43:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.491 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:13.491 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:13.749 [2024-09-29 21:43:32.541163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:13.749 [2024-09-29 21:43:32.542423] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.749 [2024-09-29 21:43:32.542451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.749 [2024-09-29 21:43:32.542465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.749 [2024-09-29 21:43:32.542476] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.749 [2024-09-29 21:43:32.542485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.749 [2024-09-29 21:43:32.542493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.749 [2024-09-29 21:43:32.542501] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.749 [2024-09-29 21:43:32.542509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.749 [2024-09-29 21:43:32.542517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.749 [2024-09-29 21:43:32.542525] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.749 [2024-09-29 21:43:32.542535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.749 [2024-09-29 21:43:32.542541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.749 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:13.749 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.749 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.749 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.749 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.749 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:12:13.749 21:43:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.749 21:43:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.749 21:43:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.006 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:14.006 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:14.006 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.006 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.006 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:14.006 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:14.006 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.006 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.006 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.006 21:43:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:14.264 21:43:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:14.264 21:43:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.264 21:43:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.81 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.81 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.81 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.81 2 00:12:26.515 remove_attach_helper took 44.81s to complete (handling 2 nvme drive(s)) 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:26.515 21:43:45 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67623 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67623 ']' 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67623 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67623 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:26.515 killing 
process with pid 67623 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67623' 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67623 00:12:26.515 21:43:45 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67623 00:12:27.895 21:43:46 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:27.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:28.468 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:28.468 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:28.468 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:28.468 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:28.468 00:12:28.468 real 2m29.684s 00:12:28.468 user 1m52.064s 00:12:28.468 sys 0m16.338s 00:12:28.468 21:43:47 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.468 ************************************ 00:12:28.468 21:43:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.468 END TEST sw_hotplug 00:12:28.468 ************************************ 00:12:28.730 21:43:47 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:28.730 21:43:47 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:28.730 21:43:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:28.730 21:43:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.730 21:43:47 -- common/autotest_common.sh@10 -- # set +x 00:12:28.730 ************************************ 00:12:28.730 START TEST nvme_xnvme 00:12:28.730 ************************************ 00:12:28.730 21:43:47 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:28.730 * Looking for test storage... 
00:12:28.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:28.730 21:43:47 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:28.730 21:43:47 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:28.730 21:43:47 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:28.730 21:43:47 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.730 21:43:47 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:28.731 21:43:47 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.731 21:43:47 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.731 --rc genhtml_branch_coverage=1 00:12:28.731 --rc genhtml_function_coverage=1 00:12:28.731 --rc genhtml_legend=1 00:12:28.731 --rc geninfo_all_blocks=1 00:12:28.731 --rc geninfo_unexecuted_blocks=1 00:12:28.731 00:12:28.731 ' 00:12:28.731 21:43:47 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.731 --rc genhtml_branch_coverage=1 00:12:28.731 --rc genhtml_function_coverage=1 00:12:28.731 --rc genhtml_legend=1 00:12:28.731 --rc geninfo_all_blocks=1 00:12:28.731 --rc geninfo_unexecuted_blocks=1 00:12:28.731 00:12:28.731 ' 00:12:28.731 21:43:47 
nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.731 --rc genhtml_branch_coverage=1 00:12:28.731 --rc genhtml_function_coverage=1 00:12:28.731 --rc genhtml_legend=1 00:12:28.731 --rc geninfo_all_blocks=1 00:12:28.731 --rc geninfo_unexecuted_blocks=1 00:12:28.731 00:12:28.731 ' 00:12:28.731 21:43:47 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:28.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.731 --rc genhtml_branch_coverage=1 00:12:28.731 --rc genhtml_function_coverage=1 00:12:28.731 --rc genhtml_legend=1 00:12:28.731 --rc geninfo_all_blocks=1 00:12:28.731 --rc geninfo_unexecuted_blocks=1 00:12:28.731 00:12:28.731 ' 00:12:28.731 21:43:47 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:28.731 21:43:47 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.731 21:43:47 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.731 21:43:47 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.731 21:43:47 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.731 21:43:47 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.731 21:43:47 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.731 21:43:47 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.731 21:43:47 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:28.731 21:43:47 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.731 21:43:47 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:28.731 21:43:47 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:28.731 21:43:47 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.731 21:43:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.731 
************************************ 00:12:28.731 START TEST xnvme_to_malloc_dd_copy 00:12:28.731 ************************************ 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:28.731 21:43:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:28.993 { 00:12:28.993 "subsystems": [ 00:12:28.993 { 00:12:28.993 "subsystem": "bdev", 00:12:28.993 "config": [ 00:12:28.993 { 00:12:28.993 "params": { 00:12:28.993 "block_size": 512, 00:12:28.993 "num_blocks": 2097152, 00:12:28.993 "name": "malloc0" 00:12:28.993 }, 00:12:28.993 "method": "bdev_malloc_create" 00:12:28.993 }, 00:12:28.993 { 00:12:28.993 "params": { 00:12:28.993 "io_mechanism": "libaio", 00:12:28.993 "filename": "/dev/nullb0", 00:12:28.993 "name": "null0" 00:12:28.993 }, 00:12:28.993 "method": "bdev_xnvme_create" 00:12:28.993 }, 
00:12:28.993 { 00:12:28.993 "method": "bdev_wait_for_examine" 00:12:28.993 } 00:12:28.993 ] 00:12:28.993 } 00:12:28.993 ] 00:12:28.993 } 00:12:28.993 [2024-09-29 21:43:47.761238] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:28.993 [2024-09-29 21:43:47.761382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68992 ] 00:12:28.993 [2024-09-29 21:43:47.917085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.254 [2024-09-29 21:43:48.167421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.179  Copying: 221/1024 [MB] (221 MBps) Copying: 442/1024 [MB] (221 MBps) Copying: 729/1024 [MB] (287 MBps) Copying: 1024/1024 [MB] (average 256 MBps) 00:12:37.179 00:12:37.179 21:43:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:37.179 21:43:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:37.179 21:43:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:37.179 21:43:55 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:37.179 { 00:12:37.179 "subsystems": [ 00:12:37.179 { 00:12:37.179 "subsystem": "bdev", 00:12:37.179 "config": [ 00:12:37.179 { 00:12:37.179 "params": { 00:12:37.179 "block_size": 512, 00:12:37.179 "num_blocks": 2097152, 00:12:37.179 "name": "malloc0" 00:12:37.179 }, 00:12:37.179 "method": "bdev_malloc_create" 00:12:37.179 }, 00:12:37.179 { 00:12:37.179 "params": { 00:12:37.179 "io_mechanism": "libaio", 00:12:37.179 "filename": "/dev/nullb0", 00:12:37.179 "name": "null0" 00:12:37.180 }, 00:12:37.180 "method": "bdev_xnvme_create" 00:12:37.180 }, 00:12:37.180 { 00:12:37.180 "method": "bdev_wait_for_examine" 00:12:37.180 } 00:12:37.180 ] 00:12:37.180 } 00:12:37.180 ] 00:12:37.180 } 00:12:37.180 [2024-09-29 21:43:55.637463] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:37.180 [2024-09-29 21:43:55.638023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69090 ] 00:12:37.180 [2024-09-29 21:43:55.786024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.180 [2024-09-29 21:43:55.956459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.412  Copying: 300/1024 [MB] (300 MBps) Copying: 602/1024 [MB] (301 MBps) Copying: 903/1024 [MB] (301 MBps) Copying: 1024/1024 [MB] (average 301 MBps) 00:12:43.412 00:12:43.412 21:44:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:43.412 21:44:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:43.412 21:44:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:43.412 21:44:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:43.412 21:44:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:43.412 21:44:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:43.671 { 00:12:43.671 "subsystems": [ 00:12:43.671 { 00:12:43.671 "subsystem": "bdev", 00:12:43.671 "config": [ 00:12:43.671 { 00:12:43.671 "params": { 00:12:43.671 "block_size": 512, 00:12:43.671 "num_blocks": 2097152, 00:12:43.671 "name": "malloc0" 00:12:43.671 }, 00:12:43.671 "method": "bdev_malloc_create" 00:12:43.671 }, 00:12:43.671 { 00:12:43.671 "params": { 00:12:43.671 "io_mechanism": "io_uring", 00:12:43.671 "filename": "/dev/nullb0", 00:12:43.671 "name": "null0" 00:12:43.671 }, 00:12:43.671 "method": "bdev_xnvme_create" 00:12:43.671 }, 00:12:43.671 { 00:12:43.671 "method": "bdev_wait_for_examine" 00:12:43.671 } 00:12:43.671 ] 00:12:43.671 } 00:12:43.671 ] 00:12:43.671 } 00:12:43.671 [2024-09-29 21:44:02.422774] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:43.671 [2024-09-29 21:44:02.422915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69173 ] 00:12:43.671 [2024-09-29 21:44:02.572529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.928 [2024-09-29 21:44:02.737264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.516  Copying: 306/1024 [MB] (306 MBps) Copying: 613/1024 [MB] (306 MBps) Copying: 921/1024 [MB] (307 MBps) Copying: 1024/1024 [MB] (average 307 MBps) 00:12:50.516 00:12:50.516 21:44:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:50.516 21:44:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:50.516 21:44:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:50.516 21:44:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:50.516 { 00:12:50.516 "subsystems": [ 00:12:50.516 { 00:12:50.516 "subsystem": "bdev", 00:12:50.516 "config": [ 00:12:50.516 { 00:12:50.516 "params": { 00:12:50.516 "block_size": 512, 00:12:50.516 "num_blocks": 2097152, 00:12:50.516 "name": "malloc0" 00:12:50.516 }, 00:12:50.516 "method": "bdev_malloc_create" 00:12:50.516 }, 00:12:50.516 { 00:12:50.516 "params": { 00:12:50.517 "io_mechanism": "io_uring", 00:12:50.517 "filename": "/dev/nullb0", 00:12:50.517 "name": "null0" 00:12:50.517 }, 00:12:50.517 "method": "bdev_xnvme_create" 00:12:50.517 }, 00:12:50.517 { 00:12:50.517 "method": "bdev_wait_for_examine" 00:12:50.517 } 00:12:50.517 ] 00:12:50.517 } 00:12:50.517 ] 00:12:50.517 } 00:12:50.517 [2024-09-29 21:44:09.090893] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:50.517 [2024-09-29 21:44:09.091009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69251 ] 00:12:50.517 [2024-09-29 21:44:09.237716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.517 [2024-09-29 21:44:09.401885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.756  Copying: 310/1024 [MB] (310 MBps) Copying: 619/1024 [MB] (309 MBps) Copying: 930/1024 [MB] (310 MBps) Copying: 1024/1024 [MB] (average 310 MBps) 00:12:56.756 00:12:56.756 21:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:56.756 21:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:56.756 00:12:56.756 real 0m28.036s 00:12:56.756 user 0m24.401s 00:12:56.756 sys 0m3.107s 00:12:56.756 21:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.756 ************************************ 00:12:56.756 END TEST xnvme_to_malloc_dd_copy 00:12:56.756 ************************************ 00:12:56.756 21:44:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:57.015 21:44:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:57.015 21:44:15 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:57.015 21:44:15 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:57.015 21:44:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.015 ************************************ 00:12:57.015 START TEST xnvme_bdevperf 00:12:57.015 ************************************ 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:57.015 21:44:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.015 { 00:12:57.015 "subsystems": [ 00:12:57.015 { 00:12:57.015 "subsystem": "bdev", 00:12:57.015 "config": [ 00:12:57.015 { 00:12:57.015 "params": { 00:12:57.015 "io_mechanism": "libaio", 00:12:57.015 "filename": "/dev/nullb0", 00:12:57.015 "name": "null0" 00:12:57.015 }, 00:12:57.015 "method": "bdev_xnvme_create" 00:12:57.015 }, 00:12:57.015 { 00:12:57.015 "method": "bdev_wait_for_examine" 00:12:57.015 } 00:12:57.015 ] 00:12:57.015 } 00:12:57.015 ] 00:12:57.015 } 00:12:57.015 [2024-09-29 21:44:15.848342] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:57.015 [2024-09-29 21:44:15.848458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69356 ] 00:12:57.272 [2024-09-29 21:44:15.997486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.272 [2024-09-29 21:44:16.165003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.530 Running I/O for 5 seconds... 00:13:02.635 203840.00 IOPS, 796.25 MiB/s 203936.00 IOPS, 796.62 MiB/s 203882.67 IOPS, 796.42 MiB/s 203904.00 IOPS, 796.50 MiB/s 00:13:02.635 Latency(us) 00:13:02.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.635 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:02.635 null0 : 5.00 203887.36 796.43 0.00 0.00 311.57 132.33 1569.08 00:13:02.635 =================================================================================================================== 00:13:02.635 Total : 203887.36 796.43 0.00 0.00 311.57 132.33 1569.08 00:13:03.204 21:44:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:03.204 21:44:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:03.204 21:44:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:03.204 21:44:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:03.204 21:44:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:03.204 21:44:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:03.204 { 00:13:03.204 "subsystems": [ 00:13:03.204 { 00:13:03.204 "subsystem": "bdev", 00:13:03.204 "config": [ 00:13:03.204 { 00:13:03.204 "params": { 00:13:03.204 "io_mechanism": "io_uring", 00:13:03.204 "filename": "/dev/nullb0", 00:13:03.204 "name": "null0" 00:13:03.204 }, 00:13:03.204 "method": "bdev_xnvme_create" 00:13:03.204 }, 00:13:03.204 { 00:13:03.204 "method": "bdev_wait_for_examine" 00:13:03.204 } 00:13:03.204 ] 00:13:03.204 } 00:13:03.204 ] 00:13:03.204 } 00:13:03.204 [2024-09-29 21:44:22.161151] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
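Both bdevperf passes in this test use an identical command line and differ only in the io_mechanism of the null0 bdev (libaio above, io_uring next); the bdev sits on the 1 GiB /dev/nullb0 from `modprobe null_blk gb=1`. A reproduction sketch, writing the printed JSON to a file instead of passing it over /dev/fd/62 (conf.json is a hypothetical filename):

    # conf.json holds the bdev_xnvme_create config printed above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json conf.json -q 64 -w randread -t 5 -T null0 -o 4096

-q 64 and -o 4096 match the "depth: 64, IO size: 4096" in the job banner, and -T null0 pins the run to that single bdev.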
00:13:03.204 [2024-09-29 21:44:22.161262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69431 ] 00:13:03.464 [2024-09-29 21:44:22.308800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.722 [2024-09-29 21:44:22.465909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.722 Running I/O for 5 seconds... 00:13:08.846 232192.00 IOPS, 907.00 MiB/s 232192.00 IOPS, 907.00 MiB/s 232106.67 IOPS, 906.67 MiB/s 232080.00 IOPS, 906.56 MiB/s 00:13:08.846 Latency(us) 00:13:08.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.846 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:08.846 null0 : 5.00 231991.55 906.22 0.00 0.00 273.89 160.69 1518.67 00:13:08.846 =================================================================================================================== 00:13:08.846 Total : 231991.55 906.22 0.00 0.00 273.89 160.69 1518.67 00:13:09.414 21:44:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:09.414 21:44:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:09.675 00:13:09.675 real 0m12.649s 00:13:09.675 user 0m10.216s 00:13:09.675 sys 0m2.184s 00:13:09.675 21:44:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.675 ************************************ 00:13:09.675 END TEST xnvme_bdevperf 00:13:09.675 ************************************ 00:13:09.675 21:44:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 ************************************ 00:13:09.675 00:13:09.675 real 0m40.966s 00:13:09.675 user 0m34.736s 00:13:09.675 sys 0m5.415s 00:13:09.675 21:44:28 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.675 21:44:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 END TEST nvme_xnvme 00:13:09.675 ************************************ 00:13:09.675 21:44:28 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:09.675 21:44:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:09.675 21:44:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.675 21:44:28 -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 ************************************ 00:13:09.675 START TEST blockdev_xnvme 00:13:09.675 ************************************ 00:13:09.675 21:44:28 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:09.675 * Looking for test storage... 
00:13:09.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:09.675 21:44:28 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:09.675 21:44:28 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:09.675 21:44:28 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:09.675 21:44:28 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.675 21:44:28 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:09.937 21:44:28 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:09.937 21:44:28 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.937 21:44:28 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:09.937 21:44:28 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.937 21:44:28 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.937 21:44:28 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.937 21:44:28 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:09.937 21:44:28 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.937 21:44:28 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:09.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.937 --rc genhtml_branch_coverage=1 00:13:09.937 --rc genhtml_function_coverage=1 00:13:09.937 --rc genhtml_legend=1 00:13:09.937 --rc geninfo_all_blocks=1 00:13:09.937 --rc geninfo_unexecuted_blocks=1 00:13:09.937 00:13:09.937 ' 00:13:09.937 21:44:28 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:09.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.937 --rc genhtml_branch_coverage=1 00:13:09.937 --rc genhtml_function_coverage=1 00:13:09.937 --rc genhtml_legend=1 
00:13:09.937 --rc geninfo_all_blocks=1 00:13:09.937 --rc geninfo_unexecuted_blocks=1 00:13:09.937 00:13:09.937 ' 00:13:09.937 21:44:28 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:09.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.937 --rc genhtml_branch_coverage=1 00:13:09.937 --rc genhtml_function_coverage=1 00:13:09.937 --rc genhtml_legend=1 00:13:09.937 --rc geninfo_all_blocks=1 00:13:09.937 --rc geninfo_unexecuted_blocks=1 00:13:09.937 00:13:09.937 ' 00:13:09.937 21:44:28 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:09.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.937 --rc genhtml_branch_coverage=1 00:13:09.937 --rc genhtml_function_coverage=1 00:13:09.937 --rc genhtml_legend=1 00:13:09.937 --rc geninfo_all_blocks=1 00:13:09.937 --rc geninfo_unexecuted_blocks=1 00:13:09.937 00:13:09.937 ' 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:09.937 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:09.938 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:09.938 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:09.938 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:09.938 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:09.938 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:09.938 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69573 00:13:09.938 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:09.938 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69573 00:13:09.938 21:44:28 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69573 ']' 00:13:09.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
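waitforlisten 69573 above amounts to polling the freshly started target's RPC socket until it answers. A minimal sketch of the same idea, assuming the default /var/tmp/spdk.sock (the polling loop is illustrative, not the autotest helper itself):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # spdk_get_version is a cheap RPC that succeeds once the socket is live
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done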
00:13:09.938 21:44:28 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.938 21:44:28 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.938 21:44:28 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.938 21:44:28 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.938 21:44:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:09.938 21:44:28 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:09.938 [2024-09-29 21:44:28.763330] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:09.938 [2024-09-29 21:44:28.763499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69573 ] 00:13:10.203 [2024-09-29 21:44:28.920032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.203 [2024-09-29 21:44:29.184377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.150 21:44:29 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:11.150 21:44:29 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:11.150 21:44:29 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:11.150 21:44:29 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:11.150 21:44:29 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:11.150 21:44:29 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:11.150 21:44:29 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:11.411 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:11.671 Waiting for block devices as requested 00:13:11.671 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.671 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.671 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.928 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:17.194 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 
00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.194 21:44:35 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:17.194 nvme0n1 00:13:17.194 nvme1n1 00:13:17.194 nvme2n1 00:13:17.194 nvme2n2 00:13:17.194 nvme2n3 00:13:17.194 nvme3n1 00:13:17.194 21:44:35 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.194 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.195 21:44:35 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "b8482adc-1d8f-455d-ad6d-93ad6b4f480d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b8482adc-1d8f-455d-ad6d-93ad6b4f480d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1bbc3ae5-abea-4d22-bc4d-f31daba928f1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1bbc3ae5-abea-4d22-bc4d-f31daba928f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "272106f3-bda9-430a-8373-f1bfad374c0f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "272106f3-bda9-430a-8373-f1bfad374c0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' 
' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "9c93fcb3-9e6f-4dfa-adcf-9df95112fd27"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c93fcb3-9e6f-4dfa-adcf-9df95112fd27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "352c6f18-3f93-475a-b793-0abb1dcd995a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "352c6f18-3f93-475a-b793-0abb1dcd995a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "426c23d2-4d20-476a-86d9-f77341694398"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "426c23d2-4d20-476a-86d9-f77341694398",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:17.195 21:44:35 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69573 00:13:17.195 21:44:35 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69573 ']' 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69573 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.195 21:44:35 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69573 00:13:17.195 21:44:36 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:17.195 21:44:36 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:17.195 killing process with pid 69573 00:13:17.195 21:44:36 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69573' 00:13:17.195 21:44:36 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69573 00:13:17.195 21:44:36 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69573 00:13:18.572 21:44:37 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:18.572 21:44:37 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:18.572 21:44:37 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:18.572 21:44:37 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.572 21:44:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:18.572 ************************************ 00:13:18.572 START TEST bdev_hello_world 00:13:18.572 ************************************ 00:13:18.572 21:44:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:18.572 [2024-09-29 21:44:37.435960] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:18.572 [2024-09-29 21:44:37.436080] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69937 ] 00:13:18.833 [2024-09-29 21:44:37.584990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.833 [2024-09-29 21:44:37.755515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.132 [2024-09-29 21:44:38.065116] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:19.132 [2024-09-29 21:44:38.065162] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:19.132 [2024-09-29 21:44:38.065175] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:19.132 [2024-09-29 21:44:38.066779] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:19.132 [2024-09-29 21:44:38.067302] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:19.132 [2024-09-29 21:44:38.067320] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:19.132 [2024-09-29 21:44:38.068351] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
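The NOTICE lines above trace the whole hello_bdev round trip: open nvme0n1 from the generated bdev.json, write "Hello World!", and read it back. The invocation, exactly as run_test issued it:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1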
00:13:19.132 00:13:19.132 [2024-09-29 21:44:38.068402] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:20.090 00:13:20.090 real 0m1.378s 00:13:20.090 user 0m1.061s 00:13:20.090 sys 0m0.198s 00:13:20.090 21:44:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.090 ************************************ 00:13:20.090 END TEST bdev_hello_world 00:13:20.090 ************************************ 00:13:20.090 21:44:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 21:44:38 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:20.090 21:44:38 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:20.090 21:44:38 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.090 21:44:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 ************************************ 00:13:20.090 START TEST bdev_bounds 00:13:20.090 ************************************ 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69972 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:20.090 Process bdevio pid: 69972 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69972' 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69972 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 69972 ']' 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.090 21:44:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:20.090 [2024-09-29 21:44:38.873322] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
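bdev_bounds pairs two processes, both visible in the trace: bdevio started with -w so it waits to be driven over its RPC socket, and tests.py kicking off the suites against the six xnvme bdevs:

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests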
00:13:20.090 [2024-09-29 21:44:38.873436] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69972 ] 00:13:20.090 [2024-09-29 21:44:39.017832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.347 [2024-09-29 21:44:39.192820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.347 [2024-09-29 21:44:39.193321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.347 [2024-09-29 21:44:39.193321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.912 21:44:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.912 21:44:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:20.912 21:44:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:20.912 I/O targets: 00:13:20.912 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:20.912 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:20.912 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:20.912 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:20.912 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:20.912 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:20.912 00:13:20.912 00:13:20.912 CUnit - A unit testing framework for C - Version 2.1-3 00:13:20.912 http://cunit.sourceforge.net/ 00:13:20.912 00:13:20.912 00:13:20.912 Suite: bdevio tests on: nvme3n1 00:13:20.912 Test: blockdev write read block ...passed 00:13:20.912 Test: blockdev write zeroes read block ...passed 00:13:20.912 Test: blockdev write zeroes read no split ...passed 00:13:20.912 Test: blockdev write zeroes read split ...passed 00:13:20.912 Test: blockdev write zeroes read split partial ...passed 00:13:20.912 Test: blockdev reset ...passed 00:13:20.912 Test: blockdev write read 8 blocks ...passed 00:13:20.912 Test: blockdev write read size > 128k ...passed 00:13:20.912 Test: blockdev write read invalid size ...passed 00:13:20.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:20.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:20.912 Test: blockdev write read max offset ...passed 00:13:20.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:20.912 Test: blockdev writev readv 8 blocks ...passed 00:13:20.912 Test: blockdev writev readv 30 x 1block ...passed 00:13:20.912 Test: blockdev writev readv block ...passed 00:13:20.912 Test: blockdev writev readv size > 128k ...passed 00:13:20.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:20.912 Test: blockdev comparev and writev ...passed 00:13:20.912 Test: blockdev nvme passthru rw ...passed 00:13:20.912 Test: blockdev nvme passthru vendor specific ...passed 00:13:20.912 Test: blockdev nvme admin passthru ...passed 00:13:20.912 Test: blockdev copy ...passed 00:13:20.912 Suite: bdevio tests on: nvme2n3 00:13:20.912 Test: blockdev write read block ...passed 00:13:20.912 Test: blockdev write zeroes read block ...passed 00:13:20.912 Test: blockdev write zeroes read no split ...passed 00:13:21.170 Test: blockdev write zeroes read split ...passed 00:13:21.170 Test: blockdev write zeroes read split partial ...passed 00:13:21.170 Test: blockdev reset ...passed 
00:13:21.170 Test: blockdev write read 8 blocks ...passed 00:13:21.170 Test: blockdev write read size > 128k ...passed 00:13:21.170 Test: blockdev write read invalid size ...passed 00:13:21.170 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.170 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.170 Test: blockdev write read max offset ...passed 00:13:21.170 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.170 Test: blockdev writev readv 8 blocks ...passed 00:13:21.170 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.170 Test: blockdev writev readv block ...passed 00:13:21.170 Test: blockdev writev readv size > 128k ...passed 00:13:21.170 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.170 Test: blockdev comparev and writev ...passed 00:13:21.170 Test: blockdev nvme passthru rw ...passed 00:13:21.170 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.170 Test: blockdev nvme admin passthru ...passed 00:13:21.170 Test: blockdev copy ...passed 00:13:21.170 Suite: bdevio tests on: nvme2n2 00:13:21.170 Test: blockdev write read block ...passed 00:13:21.170 Test: blockdev write zeroes read block ...passed 00:13:21.170 Test: blockdev write zeroes read no split ...passed 00:13:21.170 Test: blockdev write zeroes read split ...passed 00:13:21.170 Test: blockdev write zeroes read split partial ...passed 00:13:21.170 Test: blockdev reset ...passed 00:13:21.170 Test: blockdev write read 8 blocks ...passed 00:13:21.170 Test: blockdev write read size > 128k ...passed 00:13:21.170 Test: blockdev write read invalid size ...passed 00:13:21.170 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.170 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.170 Test: blockdev write read max offset ...passed 00:13:21.170 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.170 Test: blockdev writev readv 8 blocks ...passed 00:13:21.170 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.170 Test: blockdev writev readv block ...passed 00:13:21.170 Test: blockdev writev readv size > 128k ...passed 00:13:21.170 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.170 Test: blockdev comparev and writev ...passed 00:13:21.170 Test: blockdev nvme passthru rw ...passed 00:13:21.170 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.170 Test: blockdev nvme admin passthru ...passed 00:13:21.170 Test: blockdev copy ...passed 00:13:21.170 Suite: bdevio tests on: nvme2n1 00:13:21.170 Test: blockdev write read block ...passed 00:13:21.170 Test: blockdev write zeroes read block ...passed 00:13:21.170 Test: blockdev write zeroes read no split ...passed 00:13:21.170 Test: blockdev write zeroes read split ...passed 00:13:21.170 Test: blockdev write zeroes read split partial ...passed 00:13:21.170 Test: blockdev reset ...passed 00:13:21.170 Test: blockdev write read 8 blocks ...passed 00:13:21.170 Test: blockdev write read size > 128k ...passed 00:13:21.170 Test: blockdev write read invalid size ...passed 00:13:21.170 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.170 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.170 Test: blockdev write read max offset ...passed 00:13:21.170 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.170 Test: blockdev writev readv 8 blocks 
...passed 00:13:21.170 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.170 Test: blockdev writev readv block ...passed 00:13:21.170 Test: blockdev writev readv size > 128k ...passed 00:13:21.170 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.170 Test: blockdev comparev and writev ...passed 00:13:21.170 Test: blockdev nvme passthru rw ...passed 00:13:21.170 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.170 Test: blockdev nvme admin passthru ...passed 00:13:21.170 Test: blockdev copy ...passed 00:13:21.170 Suite: bdevio tests on: nvme1n1 00:13:21.170 Test: blockdev write read block ...passed 00:13:21.170 Test: blockdev write zeroes read block ...passed 00:13:21.170 Test: blockdev write zeroes read no split ...passed 00:13:21.170 Test: blockdev write zeroes read split ...passed 00:13:21.429 Test: blockdev write zeroes read split partial ...passed 00:13:21.429 Test: blockdev reset ...passed 00:13:21.429 Test: blockdev write read 8 blocks ...passed 00:13:21.429 Test: blockdev write read size > 128k ...passed 00:13:21.429 Test: blockdev write read invalid size ...passed 00:13:21.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.429 Test: blockdev write read max offset ...passed 00:13:21.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.429 Test: blockdev writev readv 8 blocks ...passed 00:13:21.429 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.429 Test: blockdev writev readv block ...passed 00:13:21.429 Test: blockdev writev readv size > 128k ...passed 00:13:21.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.429 Test: blockdev comparev and writev ...passed 00:13:21.429 Test: blockdev nvme passthru rw ...passed 00:13:21.429 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.429 Test: blockdev nvme admin passthru ...passed 00:13:21.429 Test: blockdev copy ...passed 00:13:21.429 Suite: bdevio tests on: nvme0n1 00:13:21.429 Test: blockdev write read block ...passed 00:13:21.429 Test: blockdev write zeroes read block ...passed 00:13:21.429 Test: blockdev write zeroes read no split ...passed 00:13:21.429 Test: blockdev write zeroes read split ...passed 00:13:21.429 Test: blockdev write zeroes read split partial ...passed 00:13:21.429 Test: blockdev reset ...passed 00:13:21.429 Test: blockdev write read 8 blocks ...passed 00:13:21.429 Test: blockdev write read size > 128k ...passed 00:13:21.429 Test: blockdev write read invalid size ...passed 00:13:21.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.429 Test: blockdev write read max offset ...passed 00:13:21.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.429 Test: blockdev writev readv 8 blocks ...passed 00:13:21.429 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.429 Test: blockdev writev readv block ...passed 00:13:21.429 Test: blockdev writev readv size > 128k ...passed 00:13:21.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.429 Test: blockdev comparev and writev ...passed 00:13:21.429 Test: blockdev nvme passthru rw ...passed 00:13:21.429 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.429 Test: blockdev nvme admin passthru ...passed 00:13:21.429 Test: blockdev copy ...passed 
00:13:21.429 00:13:21.429 Run Summary: Type Total Ran Passed Failed Inactive 00:13:21.429 suites 6 6 n/a 0 0 00:13:21.429 tests 138 138 138 0 0 00:13:21.429 asserts 780 780 780 0 n/a 00:13:21.429 00:13:21.429 Elapsed time = 1.127 seconds 00:13:21.429 0 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69972 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 69972 ']' 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 69972 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69972 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.429 killing process with pid 69972 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69972' 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 69972 00:13:21.429 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 69972 00:13:22.363 21:44:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:22.363 00:13:22.363 real 0m2.164s 00:13:22.363 user 0m5.070s 00:13:22.363 sys 0m0.323s 00:13:22.363 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.363 21:44:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:22.363 ************************************ 00:13:22.363 END TEST bdev_bounds 00:13:22.363 ************************************ 00:13:22.363 21:44:41 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:22.363 21:44:41 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:22.363 21:44:41 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.363 21:44:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.363 ************************************ 00:13:22.363 START TEST bdev_nbd 00:13:22.363 ************************************ 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=70035 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 70035 /var/tmp/spdk-nbd.sock 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 70035 ']' 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:22.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:22.363 21:44:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:22.363 [2024-09-29 21:44:41.111124] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
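Each pass of the loop traced below exports one bdev over NBD via the bdev_svc RPC socket, then reads a block back through the kernel device. Condensed to its two essential commands (nbd_start_disk prints the /dev/nbdX it attached; the sketch assumes the kernel handed out /dev/nbd0 as in the trace, and /tmp/nbdtest stands in for the test's scratch file):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk nvme0n1
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct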
00:13:22.363 [2024-09-29 21:44:41.111672] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.363 [2024-09-29 21:44:41.261027] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.621 [2024-09-29 21:44:41.422110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.192 21:44:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.453 
1+0 records in 00:13:23.453 1+0 records out 00:13:23.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118689 s, 3.5 MB/s 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.453 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:23.454 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.454 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.454 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.712 1+0 records in 00:13:23.712 1+0 records out 00:13:23.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000913467 s, 4.5 MB/s 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.712 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:23.973 21:44:42 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.973 1+0 records in 00:13:23.973 1+0 records out 00:13:23.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00151246 s, 2.7 MB/s 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.973 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.234 1+0 records in 00:13:24.234 1+0 records out 00:13:24.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109062 s, 3.8 MB/s 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.234 21:44:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.234 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.234 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.234 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.234 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.234 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.495 1+0 records in 00:13:24.495 1+0 records out 00:13:24.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115596 s, 3.5 MB/s 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.495 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:24.756 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:24.756 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:24.756 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:24.756 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:24.756 21:44:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.756 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.756 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.756 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:24.756 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.756 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.757 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.757 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.757 1+0 records in 00:13:24.757 1+0 records out 00:13:24.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000960743 s, 4.3 MB/s 00:13:24.757 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.757 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.757 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.757 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.757 21:44:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.757 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.757 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.757 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:25.016 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd0", 00:13:25.016 "bdev_name": "nvme0n1" 00:13:25.016 }, 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd1", 00:13:25.016 "bdev_name": "nvme1n1" 00:13:25.016 }, 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd2", 00:13:25.016 "bdev_name": "nvme2n1" 00:13:25.016 }, 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd3", 00:13:25.016 "bdev_name": "nvme2n2" 00:13:25.016 }, 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd4", 00:13:25.016 "bdev_name": "nvme2n3" 00:13:25.016 }, 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd5", 00:13:25.016 "bdev_name": "nvme3n1" 00:13:25.016 } 00:13:25.016 ]' 00:13:25.016 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:25.016 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd0", 00:13:25.016 "bdev_name": "nvme0n1" 00:13:25.016 }, 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd1", 00:13:25.016 "bdev_name": "nvme1n1" 00:13:25.016 }, 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd2", 00:13:25.016 "bdev_name": "nvme2n1" 00:13:25.016 }, 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd3", 00:13:25.016 "bdev_name": "nvme2n2" 00:13:25.016 }, 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd4", 00:13:25.016 "bdev_name": "nvme2n3" 00:13:25.016 }, 00:13:25.016 { 00:13:25.016 "nbd_device": "/dev/nbd5", 00:13:25.016 "bdev_name": "nvme3n1" 00:13:25.016 } 00:13:25.016 ]' 00:13:25.016 21:44:43 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:25.016 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:25.016 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.016 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:25.016 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.016 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:25.016 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.016 21:44:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.275 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:25.533 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:25.533 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:25.533 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:25.533 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.533 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.533 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:25.533 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.533 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.533 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.533 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:25.792 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:25.792 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:25.792 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:25.792 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.792 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.792 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:25.792 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.792 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.792 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.792 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:26.050 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:26.050 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:26.050 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:26.050 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.050 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.050 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:26.050 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.050 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.050 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.050 21:44:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:26.308 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:26.566 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:26.566 /dev/nbd0 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.824 1+0 records in 00:13:26.824 1+0 records out 00:13:26.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000964835 s, 4.2 MB/s 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:26.824 /dev/nbd1 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.824 1+0 records in 00:13:26.824 1+0 records out 00:13:26.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101989 s, 4.0 MB/s 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.824 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:26.824 21:44:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.082 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.082 21:44:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.082 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.082 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.082 21:44:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:27.082 /dev/nbd10 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.082 1+0 records in 00:13:27.082 1+0 records out 00:13:27.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012368 s, 3.3 MB/s 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.082 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:27.343 /dev/nbd11 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd 
-- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.343 1+0 records in 00:13:27.343 1+0 records out 00:13:27.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096936 s, 4.2 MB/s 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.343 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:27.604 /dev/nbd12 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.604 1+0 records in 00:13:27.604 1+0 records out 00:13:27.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848473 s, 4.8 MB/s 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.604 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:27.865 /dev/nbd13 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.865 1+0 records in 00:13:27.865 1+0 records out 00:13:27.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100572 s, 4.1 MB/s 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.865 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.866 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:27.866 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.866 21:44:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd0", 00:13:28.127 "bdev_name": "nvme0n1" 00:13:28.127 }, 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd1", 00:13:28.127 "bdev_name": "nvme1n1" 00:13:28.127 }, 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd10", 00:13:28.127 "bdev_name": "nvme2n1" 00:13:28.127 }, 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd11", 00:13:28.127 "bdev_name": "nvme2n2" 00:13:28.127 }, 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd12", 00:13:28.127 "bdev_name": "nvme2n3" 00:13:28.127 }, 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd13", 00:13:28.127 "bdev_name": "nvme3n1" 00:13:28.127 } 00:13:28.127 ]' 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd0", 00:13:28.127 "bdev_name": "nvme0n1" 00:13:28.127 }, 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd1", 00:13:28.127 "bdev_name": "nvme1n1" 00:13:28.127 }, 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd10", 00:13:28.127 "bdev_name": "nvme2n1" 00:13:28.127 }, 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd11", 00:13:28.127 "bdev_name": "nvme2n2" 00:13:28.127 }, 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd12", 00:13:28.127 "bdev_name": "nvme2n3" 00:13:28.127 }, 00:13:28.127 { 00:13:28.127 "nbd_device": "/dev/nbd13", 00:13:28.127 "bdev_name": "nvme3n1" 00:13:28.127 } 00:13:28.127 ]' 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:28.127 /dev/nbd1 00:13:28.127 /dev/nbd10 00:13:28.127 /dev/nbd11 00:13:28.127 /dev/nbd12 00:13:28.127 /dev/nbd13' 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:28.127 /dev/nbd1 00:13:28.127 /dev/nbd10 00:13:28.127 /dev/nbd11 00:13:28.127 /dev/nbd12 00:13:28.127 /dev/nbd13' 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:28.127 256+0 records in 00:13:28.127 256+0 records out 00:13:28.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00747358 s, 140 MB/s 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.127 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:28.387 256+0 records in 00:13:28.387 256+0 records out 00:13:28.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19256 s, 5.4 MB/s 00:13:28.387 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.387 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:28.648 256+0 records in 00:13:28.648 256+0 records out 00:13:28.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.329848 s, 3.2 MB/s 00:13:28.648 
21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.648 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:28.909 256+0 records in 00:13:28.909 256+0 records out 00:13:28.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.249458 s, 4.2 MB/s 00:13:28.909 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.909 21:44:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:29.168 256+0 records in 00:13:29.168 256+0 records out 00:13:29.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.212677 s, 4.9 MB/s 00:13:29.168 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:29.168 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:29.427 256+0 records in 00:13:29.427 256+0 records out 00:13:29.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.237532 s, 4.4 MB/s 00:13:29.427 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:29.427 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:29.685 256+0 records in 00:13:29.685 256+0 records out 00:13:29.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.215835 s, 4.9 MB/s 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:29.685 21:44:48 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.685 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:29.944 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.944 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.944 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.944 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.944 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.944 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.944 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.944 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.944 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.944 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:30.211 21:44:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.211 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.211 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.211 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.211 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.211 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.211 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.211 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.211 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.211 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 
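Each nbd_stop_disk RPC above is paired with a waitfornbd_exit poll that waits for the kernel to drop the device from /proc/partitions (the trace for /dev/nbd10 continues below). A minimal sketch of that helper, reconstructed from the nbd_common.sh xtrace; the 0.1 s sleep between polls is an assumption, since xtrace only records the commands that actually ran:

  # Poll until the given NBD device disappears from /proc/partitions.
  # Reconstructed from the trace above; the sleep interval is assumed.
  waitfornbd_exit() {
      local nbd_name=$1
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              sleep 0.1  # still registered; retry (up to 20 attempts)
          else
              break      # gone from /proc/partitions: disconnect completed
          fi
      done
      return 0
  }

Note that the trace shows an unconditional return 0 at nbd_common.sh@45: the stop path treats a device that is still present after 20 polls as non-fatal.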
00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.469 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:30.728 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:30.728 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:30.728 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:30.728 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.728 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.728 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:30.728 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.728 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.728 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.728 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:30.990 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:30.990 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:30.990 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:30.990 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.990 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.990 21:44:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:30.990 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.990 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.990 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:30.990 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.990 21:44:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:31.250 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:31.510 malloc_lvol_verify 00:13:31.510 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:31.770 b40be63f-9d21-46b9-8a1c-ab522d22c4e5 00:13:31.770 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:32.030 850537f0-68e7-4c85-a7f8-76b683d0fe9c 00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:32.030 /dev/nbd0 00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:32.030 mke2fs 1.47.0 (5-Feb-2023) 00:13:32.030 Discarding device blocks: 0/4096 done 
00:13:32.030 Creating filesystem with 4096 1k blocks and 1024 inodes
00:13:32.030
00:13:32.030 Allocating group tables: 0/1 done
00:13:32.030 Writing inode tables: 0/1 done
00:13:32.030 Creating journal (1024 blocks): done
00:13:32.030 Writing superblocks and filesystem accounting information: 0/1 done
00:13:32.030
00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:32.030 21:44:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 70035
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 70035 ']'
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 70035
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70035
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:32.290 killing process with pid 70035
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70035'
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 70035
00:13:32.290 21:44:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 70035
00:13:33.233 21:44:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:13:33.233
00:13:33.233 real 0m11.120s
00:13:33.233 user 0m14.842s
00:13:33.233 sys 0m3.779s
00:13:33.233 21:44:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:33.233 21:44:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:13:33.233 ************************************
00:13:33.233 END TEST bdev_nbd
00:13:33.233 ************************************
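The readiness probe repeated for every device in the test above is worth spelling out: waitfornbd first waits for the device name to show up in /proc/partitions, then confirms the device actually serves data with a single 4 KiB O_DIRECT read. A sketch reconstructed from the autotest_common.sh xtrace (the retry sleeps and the failure return are assumptions; the scratch-file path is the one shown in the trace):

  # Wait for an NBD device to register and become readable.
  # Reconstructed from the xtrace above; sleeps and failure path are assumed.
  waitfornbd() {
      local nbd_name=$1
      local i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1  # assumed back-off while the kernel registers the device
      done
      for ((i = 1; i <= 20; i++)); do
          # One direct-I/O block read proves the NBD connection serves data
          dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
              bs=4096 count=1 iflag=direct && break
          sleep 0.1
      done
      size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
      rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      [ "$size" != 0 ] && return 0
      return 1  # assumed failure path; never taken in the run above
  }

The direct read appears to guard against the window where the device node already exists but the SPDK NBD server is not yet wired up to serve I/O; a bare /proc/partitions check would pass too early.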
21:44:52 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:33.233 21:44:52 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:33.233 21:44:52 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:33.233 21:44:52 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:33.233 21:44:52 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:33.233 21:44:52 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.233 21:44:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.494 ************************************ 00:13:33.494 START TEST bdev_fio 00:13:33.494 ************************************ 00:13:33.494 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- 
# for b in "${bdevs_name[@]}" 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:33.494 ************************************ 00:13:33.494 START TEST bdev_fio_rw_verify 00:13:33.494 ************************************ 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.494 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.495 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:33.495 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:33.495 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.495 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.495 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:33.495 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:33.495 21:44:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.756 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.756 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.756 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.756 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.756 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.756 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.756 fio-3.35 00:13:33.756 Starting 6 threads 00:13:46.008 00:13:46.008 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70440: Sun Sep 29 21:45:03 2024 00:13:46.008 read: IOPS=10.9k, BW=42.4MiB/s (44.5MB/s)(424MiB/10002msec) 00:13:46.008 slat (usec): min=2, max=1701, avg= 7.44, stdev=13.36 00:13:46.008 clat (usec): min=100, max=10240, avg=1883.44, stdev=902.26 00:13:46.008 lat (usec): min=104, max=10252, avg=1890.88, stdev=902.74 
00:13:46.008 clat percentiles (usec): 00:13:46.008 | 50.000th=[ 1745], 99.000th=[ 4752], 99.900th=[ 6849], 99.990th=[ 8979], 00:13:46.008 | 99.999th=[10290] 00:13:46.008 write: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(438MiB/10002msec); 0 zone resets 00:13:46.008 slat (usec): min=12, max=5738, avg=45.29, stdev=170.62 00:13:46.008 clat (usec): min=100, max=10688, avg=2100.24, stdev=968.08 00:13:46.008 lat (usec): min=117, max=10718, avg=2145.52, stdev=982.24 00:13:46.008 clat percentiles (usec): 00:13:46.008 | 50.000th=[ 1958], 99.000th=[ 5145], 99.900th=[ 6915], 99.990th=[ 8979], 00:13:46.008 | 99.999th=[10683] 00:13:46.008 bw ( KiB/s): min=36289, max=49368, per=100.00%, avg=45034.26, stdev=804.27, samples=114 00:13:46.008 iops : min= 9072, max=12342, avg=11258.00, stdev=201.04, samples=114 00:13:46.008 lat (usec) : 250=0.27%, 500=1.72%, 750=3.51%, 1000=6.19% 00:13:46.008 lat (msec) : 2=45.14%, 4=39.68%, 10=3.49%, 20=0.01% 00:13:46.008 cpu : usr=48.01%, sys=31.22%, ctx=4176, majf=0, minf=12263 00:13:46.008 IO depths : 1=11.5%, 2=24.0%, 4=51.1%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.008 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.008 issued rwts: total=108544,112127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.008 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:46.008 00:13:46.008 Run status group 0 (all jobs): 00:13:46.008 READ: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=424MiB (445MB), run=10002-10002msec 00:13:46.008 WRITE: bw=43.8MiB/s (45.9MB/s), 43.8MiB/s-43.8MiB/s (45.9MB/s-45.9MB/s), io=438MiB (459MB), run=10002-10002msec 00:13:46.008 ----------------------------------------------------- 00:13:46.008 Suppressions used: 00:13:46.008 count bytes template 00:13:46.008 6 48 /usr/src/fio/parse.c 00:13:46.008 3509 336864 /usr/src/fio/iolog.c 00:13:46.008 1 8 libtcmalloc_minimal.so 00:13:46.008 1 904 libcrypto.so 00:13:46.008 ----------------------------------------------------- 00:13:46.008 00:13:46.008 00:13:46.008 real 0m12.059s 00:13:46.008 user 0m30.365s 00:13:46.008 sys 0m19.102s 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:46.008 ************************************ 00:13:46.008 END TEST bdev_fio_rw_verify 00:13:46.008 ************************************ 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:46.008 21:45:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:46.009 21:45:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "b8482adc-1d8f-455d-ad6d-93ad6b4f480d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b8482adc-1d8f-455d-ad6d-93ad6b4f480d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1bbc3ae5-abea-4d22-bc4d-f31daba928f1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1bbc3ae5-abea-4d22-bc4d-f31daba928f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "272106f3-bda9-430a-8373-f1bfad374c0f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "272106f3-bda9-430a-8373-f1bfad374c0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "9c93fcb3-9e6f-4dfa-adcf-9df95112fd27"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9c93fcb3-9e6f-4dfa-adcf-9df95112fd27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "352c6f18-3f93-475a-b793-0abb1dcd995a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "352c6f18-3f93-475a-b793-0abb1dcd995a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "426c23d2-4d20-476a-86d9-f77341694398"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "426c23d2-4d20-476a-86d9-f77341694398",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:46.009 21:45:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:46.009 21:45:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:46.009 21:45:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:46.009 /home/vagrant/spdk_repo/spdk 00:13:46.009 21:45:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:46.009 21:45:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:46.009 00:13:46.009 real 0m12.237s 00:13:46.009 user 
0m30.432s 00:13:46.009 sys 0m19.184s 00:13:46.009 ************************************ 00:13:46.009 END TEST bdev_fio 00:13:46.009 ************************************ 00:13:46.009 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.009 21:45:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:46.009 21:45:04 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:46.009 21:45:04 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:46.009 21:45:04 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:46.009 21:45:04 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.009 21:45:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.009 ************************************ 00:13:46.009 START TEST bdev_verify 00:13:46.009 ************************************ 00:13:46.009 21:45:04 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:46.009 [2024-09-29 21:45:04.605238] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:46.009 [2024-09-29 21:45:04.605378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70612 ] 00:13:46.009 [2024-09-29 21:45:04.759056] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:46.269 [2024-09-29 21:45:05.034081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.269 [2024-09-29 21:45:05.034216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.530 Running I/O for 5 seconds... 
00:13:51.687 22272.00 IOPS, 87.00 MiB/s 22352.00 IOPS, 87.31 MiB/s 23072.00 IOPS, 90.13 MiB/s 23464.00 IOPS, 91.66 MiB/s 23718.40 IOPS, 92.65 MiB/s 00:13:51.687 Latency(us) 00:13:51.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.687 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0x0 length 0xa0000 00:13:51.687 nvme0n1 : 5.03 1958.18 7.65 0.00 0.00 65253.32 7360.20 65737.65 00:13:51.687 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0xa0000 length 0xa0000 00:13:51.687 nvme0n1 : 5.04 1702.41 6.65 0.00 0.00 75023.78 9023.80 72997.02 00:13:51.687 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0x0 length 0xbd0bd 00:13:51.687 nvme1n1 : 5.06 2642.98 10.32 0.00 0.00 48214.02 6074.68 56461.78 00:13:51.687 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:51.687 nvme1n1 : 5.07 2434.57 9.51 0.00 0.00 52266.74 6503.19 61704.66 00:13:51.687 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0x0 length 0x80000 00:13:51.687 nvme2n1 : 5.06 2021.87 7.90 0.00 0.00 62830.87 8418.86 64931.05 00:13:51.687 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0x80000 length 0x80000 00:13:51.687 nvme2n1 : 5.06 1745.42 6.82 0.00 0.00 72805.09 8973.39 65737.65 00:13:51.687 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0x0 length 0x80000 00:13:51.687 nvme2n2 : 5.07 1945.44 7.60 0.00 0.00 65155.29 10284.11 57268.38 00:13:51.687 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0x80000 length 0x80000 00:13:51.687 nvme2n2 : 5.08 1713.50 6.69 0.00 0.00 73926.43 11241.94 64931.05 00:13:51.687 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0x0 length 0x80000 00:13:51.687 nvme2n3 : 5.07 1944.78 7.60 0.00 0.00 65070.99 7561.85 59284.87 00:13:51.687 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0x80000 length 0x80000 00:13:51.687 nvme2n3 : 5.08 1711.94 6.69 0.00 0.00 73806.83 8570.09 72593.72 00:13:51.687 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0x0 length 0x20000 00:13:51.687 nvme3n1 : 5.07 1967.75 7.69 0.00 0.00 64207.82 3806.13 63721.16 00:13:51.687 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:51.687 Verification LBA range: start 0x20000 length 0x20000 00:13:51.687 nvme3n1 : 5.09 1709.79 6.68 0.00 0.00 73812.69 4562.31 72593.72 00:13:51.687 =================================================================================================================== 00:13:51.687 Total : 23498.61 91.79 0.00 0.00 64821.49 3806.13 72997.02 00:13:53.075 00:13:53.075 real 0m7.102s 00:13:53.075 user 0m11.159s 00:13:53.075 sys 0m1.589s 00:13:53.075 21:45:11 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.075 ************************************ 00:13:53.075 END TEST bdev_verify 00:13:53.075 21:45:11 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.075 ************************************ 00:13:53.075 21:45:11 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:53.075 21:45:11 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:53.075 21:45:11 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.075 21:45:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.075 ************************************ 00:13:53.075 START TEST bdev_verify_big_io 00:13:53.075 ************************************ 00:13:53.075 21:45:11 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:53.075 [2024-09-29 21:45:11.781112] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:53.075 [2024-09-29 21:45:11.781475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70711 ] 00:13:53.075 [2024-09-29 21:45:11.936250] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:53.336 [2024-09-29 21:45:12.200117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.336 [2024-09-29 21:45:12.200125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.909 Running I/O for 5 seconds... 
00:14:00.293 1136.00 IOPS, 71.00 MiB/s 2232.00 IOPS, 139.50 MiB/s 2474.67 IOPS, 154.67 MiB/s 00:14:00.293 Latency(us) 00:14:00.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.293 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0x0 length 0xa000 00:14:00.293 nvme0n1 : 6.11 98.21 6.14 0.00 0.00 1225084.44 10536.17 1084066.26 00:14:00.293 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0xa000 length 0xa000 00:14:00.293 nvme0n1 : 6.09 84.02 5.25 0.00 0.00 1490109.05 152446.82 1432516.14 00:14:00.293 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0x0 length 0xbd0b 00:14:00.293 nvme1n1 : 6.12 125.57 7.85 0.00 0.00 978906.81 11494.01 1219574.55 00:14:00.293 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:00.293 nvme1n1 : 6.24 84.56 5.28 0.00 0.00 1400642.14 7612.26 2181038.08 00:14:00.293 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0x0 length 0x8000 00:14:00.293 nvme2n1 : 6.14 104.21 6.51 0.00 0.00 1149925.53 35288.62 993727.41 00:14:00.293 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0x8000 length 0x8000 00:14:00.293 nvme2n1 : 6.17 101.06 6.32 0.00 0.00 1128565.63 35288.62 2064888.12 00:14:00.293 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0x0 length 0x8000 00:14:00.293 nvme2n2 : 6.12 114.95 7.18 0.00 0.00 1014384.96 6856.07 1374441.16 00:14:00.293 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0x8000 length 0x8000 00:14:00.293 nvme2n2 : 6.14 87.26 5.45 0.00 0.00 1253488.11 37506.76 2413337.99 00:14:00.293 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0x0 length 0x8000 00:14:00.293 nvme2n3 : 6.12 109.80 6.86 0.00 0.00 1035638.64 20064.10 1884210.41 00:14:00.293 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0x8000 length 0x8000 00:14:00.293 nvme2n3 : 6.21 72.20 4.51 0.00 0.00 1457725.89 58478.28 3561932.01 00:14:00.293 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0x0 length 0x2000 00:14:00.293 nvme3n1 : 6.13 104.46 6.53 0.00 0.00 1059996.83 13208.02 2516582.40 00:14:00.293 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:00.293 Verification LBA range: start 0x2000 length 0x2000 00:14:00.293 nvme3n1 : 6.25 151.08 9.44 0.00 0.00 667054.64 8418.86 2310093.59 00:14:00.293 =================================================================================================================== 00:14:00.293 Total : 1237.37 77.34 0.00 0.00 1112474.77 6856.07 3561932.01 00:14:01.679 00:14:01.679 real 0m8.556s 00:14:01.679 user 0m15.372s 00:14:01.679 sys 0m0.520s 00:14:01.679 21:45:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.679 ************************************ 00:14:01.679 END TEST bdev_verify_big_io 00:14:01.679 ************************************ 00:14:01.679 21:45:20 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.679 21:45:20 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:01.679 21:45:20 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:01.679 21:45:20 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.679 21:45:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.679 ************************************ 00:14:01.679 START TEST bdev_write_zeroes 00:14:01.679 ************************************ 00:14:01.679 21:45:20 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:01.679 [2024-09-29 21:45:20.410617] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:01.679 [2024-09-29 21:45:20.410766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70833 ] 00:14:01.679 [2024-09-29 21:45:20.564528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.941 [2024-09-29 21:45:20.829238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.513 Running I/O for 1 seconds... 00:14:03.457 89248.00 IOPS, 348.62 MiB/s 00:14:03.457 Latency(us) 00:14:03.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.457 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:03.457 nvme0n1 : 1.02 14590.66 56.99 0.00 0.00 8763.19 6301.54 19459.15 00:14:03.457 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:03.457 nvme1n1 : 1.01 16051.88 62.70 0.00 0.00 7956.52 4915.20 15829.46 00:14:03.457 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:03.457 nvme2n1 : 1.02 14573.36 56.93 0.00 0.00 8755.27 6251.13 19963.27 00:14:03.457 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:03.457 nvme2n2 : 1.02 14556.71 56.86 0.00 0.00 8707.41 5368.91 20164.92 00:14:03.457 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:03.457 nvme2n3 : 1.02 14539.91 56.80 0.00 0.00 8708.71 5293.29 20366.57 00:14:03.457 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:03.457 nvme3n1 : 1.02 14608.57 57.06 0.00 0.00 8659.68 5242.88 19459.15 00:14:03.457 =================================================================================================================== 00:14:03.457 Total : 88921.09 347.35 0.00 0.00 8581.69 4915.20 20366.57 00:14:04.400 00:14:04.400 real 0m2.982s 00:14:04.400 user 0m2.214s 00:14:04.400 sys 0m0.575s 00:14:04.400 21:45:23 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.400 ************************************ 00:14:04.400 END TEST bdev_write_zeroes 00:14:04.400 ************************************ 00:14:04.400 21:45:23 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:04.400 21:45:23 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test 
bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:04.400 21:45:23 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:04.400 21:45:23 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.400 21:45:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.660 ************************************ 00:14:04.660 START TEST bdev_json_nonenclosed 00:14:04.660 ************************************ 00:14:04.660 21:45:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:04.660 [2024-09-29 21:45:23.469992] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:04.660 [2024-09-29 21:45:23.470173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70892 ] 00:14:04.660 [2024-09-29 21:45:23.624641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.921 [2024-09-29 21:45:23.887886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.921 [2024-09-29 21:45:23.888020] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:04.921 [2024-09-29 21:45:23.888042] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:04.921 [2024-09-29 21:45:23.888054] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:05.491 ************************************ 00:14:05.491 END TEST bdev_json_nonenclosed 00:14:05.491 ************************************ 00:14:05.491 00:14:05.491 real 0m0.850s 00:14:05.491 user 0m0.600s 00:14:05.491 sys 0m0.140s 00:14:05.491 21:45:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.491 21:45:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:05.491 21:45:24 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:05.491 21:45:24 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:05.491 21:45:24 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.491 21:45:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.491 ************************************ 00:14:05.491 START TEST bdev_json_nonarray 00:14:05.491 ************************************ 00:14:05.491 21:45:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:05.491 [2024-09-29 21:45:24.382355] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:05.491 [2024-09-29 21:45:24.382540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70923 ] 00:14:05.758 [2024-09-29 21:45:24.536657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.052 [2024-09-29 21:45:24.802819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.052 [2024-09-29 21:45:24.803262] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:06.052 [2024-09-29 21:45:24.803298] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:06.052 [2024-09-29 21:45:24.803311] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:06.318 ************************************ 00:14:06.318 END TEST bdev_json_nonarray 00:14:06.318 ************************************ 00:14:06.318 00:14:06.318 real 0m0.848s 00:14:06.318 user 0m0.588s 00:14:06.318 sys 0m0.148s 00:14:06.318 21:45:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.318 21:45:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:06.318 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:06.318 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:06.318 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:06.318 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:06.318 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:06.318 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:06.318 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:06.319 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:06.319 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:06.319 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:06.319 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:06.319 21:45:25 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:06.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:10.186 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:10.186 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:10.186 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:10.186 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:10.186 00:14:10.186 real 1m0.203s 00:14:10.186 user 1m30.810s 00:14:10.186 sys 0m32.854s 00:14:10.186 21:45:28 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.186 21:45:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:10.186 ************************************ 00:14:10.186 END TEST blockdev_xnvme 00:14:10.186 ************************************ 00:14:10.186 21:45:28 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:10.186 21:45:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:10.186 21:45:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.186 21:45:28 -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.186 ************************************ 00:14:10.186 START TEST ublk 00:14:10.186 ************************************ 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:10.186 * Looking for test storage... 00:14:10.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:10.186 21:45:28 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.186 21:45:28 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.186 21:45:28 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.186 21:45:28 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.186 21:45:28 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.186 21:45:28 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.186 21:45:28 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.186 21:45:28 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.186 21:45:28 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.186 21:45:28 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.186 21:45:28 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.186 21:45:28 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:10.186 21:45:28 ublk -- scripts/common.sh@345 -- # : 1 00:14:10.186 21:45:28 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.186 21:45:28 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.186 21:45:28 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:10.186 21:45:28 ublk -- scripts/common.sh@353 -- # local d=1 00:14:10.186 21:45:28 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.186 21:45:28 ublk -- scripts/common.sh@355 -- # echo 1 00:14:10.186 21:45:28 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.186 21:45:28 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:10.186 21:45:28 ublk -- scripts/common.sh@353 -- # local d=2 00:14:10.186 21:45:28 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.186 21:45:28 ublk -- scripts/common.sh@355 -- # echo 2 00:14:10.186 21:45:28 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.186 21:45:28 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.186 21:45:28 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.186 21:45:28 ublk -- scripts/common.sh@368 -- # return 0 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.186 --rc genhtml_branch_coverage=1 00:14:10.186 --rc genhtml_function_coverage=1 00:14:10.186 --rc genhtml_legend=1 00:14:10.186 --rc geninfo_all_blocks=1 00:14:10.186 --rc geninfo_unexecuted_blocks=1 00:14:10.186 00:14:10.186 ' 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.186 --rc genhtml_branch_coverage=1 00:14:10.186 --rc genhtml_function_coverage=1 00:14:10.186 --rc genhtml_legend=1 00:14:10.186 --rc geninfo_all_blocks=1 00:14:10.186 --rc geninfo_unexecuted_blocks=1 00:14:10.186 00:14:10.186 ' 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.186 --rc genhtml_branch_coverage=1 00:14:10.186 --rc genhtml_function_coverage=1 00:14:10.186 --rc genhtml_legend=1 00:14:10.186 --rc geninfo_all_blocks=1 00:14:10.186 --rc geninfo_unexecuted_blocks=1 00:14:10.186 00:14:10.186 ' 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:10.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.186 --rc genhtml_branch_coverage=1 00:14:10.186 --rc genhtml_function_coverage=1 00:14:10.186 --rc genhtml_legend=1 00:14:10.186 --rc geninfo_all_blocks=1 00:14:10.186 --rc geninfo_unexecuted_blocks=1 00:14:10.186 00:14:10.186 ' 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:10.186 21:45:28 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:10.186 21:45:28 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:10.186 21:45:28 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:10.186 21:45:28 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:10.186 21:45:28 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:10.186 21:45:28 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:10.186 21:45:28 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:10.186 21:45:28 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:10.186 21:45:28 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:10.186 21:45:28 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.186 21:45:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.186 ************************************ 00:14:10.186 START TEST test_save_ublk_config 00:14:10.186 ************************************ 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=71233 00:14:10.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 71233 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71233 ']' 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.186 21:45:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:10.186 [2024-09-29 21:45:29.063376] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:10.186 [2024-09-29 21:45:29.063787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71233 ] 00:14:10.446 [2024-09-29 21:45:29.221115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.707 [2024-09-29 21:45:29.501845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.650 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.650 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:11.650 21:45:30 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:11.650 21:45:30 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:11.650 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.650 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:11.650 [2024-09-29 21:45:30.321425] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:11.651 [2024-09-29 21:45:30.322453] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:11.651 malloc0 00:14:11.651 [2024-09-29 21:45:30.401548] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:11.651 [2024-09-29 21:45:30.401642] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:11.651 [2024-09-29 21:45:30.401653] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:11.651 [2024-09-29 21:45:30.401664] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.651 [2024-09-29 21:45:30.410540] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.651 [2024-09-29 21:45:30.410569] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.651 [2024-09-29 21:45:30.417423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.651 [2024-09-29 21:45:30.417543] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:11.651 [2024-09-29 21:45:30.434427] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:11.651 0 00:14:11.651 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.651 21:45:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:11.651 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.651 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:11.912 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.912 21:45:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:11.912 "subsystems": [ 00:14:11.912 { 00:14:11.912 "subsystem": "fsdev", 00:14:11.912 "config": [ 00:14:11.912 { 00:14:11.912 "method": "fsdev_set_opts", 00:14:11.912 "params": { 00:14:11.912 "fsdev_io_pool_size": 65535, 00:14:11.912 "fsdev_io_cache_size": 256 00:14:11.912 } 00:14:11.912 } 00:14:11.912 ] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "keyring", 00:14:11.912 "config": [] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "iobuf", 00:14:11.912 "config": [ 00:14:11.912 { 
00:14:11.912 "method": "iobuf_set_options", 00:14:11.912 "params": { 00:14:11.912 "small_pool_count": 8192, 00:14:11.912 "large_pool_count": 1024, 00:14:11.912 "small_bufsize": 8192, 00:14:11.912 "large_bufsize": 135168 00:14:11.912 } 00:14:11.912 } 00:14:11.912 ] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "sock", 00:14:11.912 "config": [ 00:14:11.912 { 00:14:11.912 "method": "sock_set_default_impl", 00:14:11.912 "params": { 00:14:11.912 "impl_name": "posix" 00:14:11.912 } 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "method": "sock_impl_set_options", 00:14:11.912 "params": { 00:14:11.912 "impl_name": "ssl", 00:14:11.912 "recv_buf_size": 4096, 00:14:11.912 "send_buf_size": 4096, 00:14:11.912 "enable_recv_pipe": true, 00:14:11.912 "enable_quickack": false, 00:14:11.912 "enable_placement_id": 0, 00:14:11.912 "enable_zerocopy_send_server": true, 00:14:11.912 "enable_zerocopy_send_client": false, 00:14:11.912 "zerocopy_threshold": 0, 00:14:11.912 "tls_version": 0, 00:14:11.912 "enable_ktls": false 00:14:11.912 } 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "method": "sock_impl_set_options", 00:14:11.912 "params": { 00:14:11.912 "impl_name": "posix", 00:14:11.912 "recv_buf_size": 2097152, 00:14:11.912 "send_buf_size": 2097152, 00:14:11.912 "enable_recv_pipe": true, 00:14:11.912 "enable_quickack": false, 00:14:11.912 "enable_placement_id": 0, 00:14:11.912 "enable_zerocopy_send_server": true, 00:14:11.912 "enable_zerocopy_send_client": false, 00:14:11.912 "zerocopy_threshold": 0, 00:14:11.912 "tls_version": 0, 00:14:11.912 "enable_ktls": false 00:14:11.912 } 00:14:11.912 } 00:14:11.912 ] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "vmd", 00:14:11.912 "config": [] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "accel", 00:14:11.912 "config": [ 00:14:11.912 { 00:14:11.912 "method": "accel_set_options", 00:14:11.912 "params": { 00:14:11.912 "small_cache_size": 128, 00:14:11.912 "large_cache_size": 16, 00:14:11.912 "task_count": 2048, 00:14:11.912 "sequence_count": 2048, 00:14:11.912 "buf_count": 2048 00:14:11.912 } 00:14:11.912 } 00:14:11.912 ] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "bdev", 00:14:11.912 "config": [ 00:14:11.912 { 00:14:11.912 "method": "bdev_set_options", 00:14:11.912 "params": { 00:14:11.912 "bdev_io_pool_size": 65535, 00:14:11.912 "bdev_io_cache_size": 256, 00:14:11.912 "bdev_auto_examine": true, 00:14:11.912 "iobuf_small_cache_size": 128, 00:14:11.912 "iobuf_large_cache_size": 16 00:14:11.912 } 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "method": "bdev_raid_set_options", 00:14:11.912 "params": { 00:14:11.912 "process_window_size_kb": 1024, 00:14:11.912 "process_max_bandwidth_mb_sec": 0 00:14:11.912 } 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "method": "bdev_iscsi_set_options", 00:14:11.912 "params": { 00:14:11.912 "timeout_sec": 30 00:14:11.912 } 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "method": "bdev_nvme_set_options", 00:14:11.912 "params": { 00:14:11.912 "action_on_timeout": "none", 00:14:11.912 "timeout_us": 0, 00:14:11.912 "timeout_admin_us": 0, 00:14:11.912 "keep_alive_timeout_ms": 10000, 00:14:11.912 "arbitration_burst": 0, 00:14:11.912 "low_priority_weight": 0, 00:14:11.912 "medium_priority_weight": 0, 00:14:11.912 "high_priority_weight": 0, 00:14:11.912 "nvme_adminq_poll_period_us": 10000, 00:14:11.912 "nvme_ioq_poll_period_us": 0, 00:14:11.912 "io_queue_requests": 0, 00:14:11.912 "delay_cmd_submit": true, 00:14:11.912 "transport_retry_count": 4, 00:14:11.912 "bdev_retry_count": 3, 00:14:11.912 
"transport_ack_timeout": 0, 00:14:11.912 "ctrlr_loss_timeout_sec": 0, 00:14:11.912 "reconnect_delay_sec": 0, 00:14:11.912 "fast_io_fail_timeout_sec": 0, 00:14:11.912 "disable_auto_failback": false, 00:14:11.912 "generate_uuids": false, 00:14:11.912 "transport_tos": 0, 00:14:11.912 "nvme_error_stat": false, 00:14:11.912 "rdma_srq_size": 0, 00:14:11.912 "io_path_stat": false, 00:14:11.912 "allow_accel_sequence": false, 00:14:11.912 "rdma_max_cq_size": 0, 00:14:11.912 "rdma_cm_event_timeout_ms": 0, 00:14:11.912 "dhchap_digests": [ 00:14:11.912 "sha256", 00:14:11.912 "sha384", 00:14:11.912 "sha512" 00:14:11.912 ], 00:14:11.912 "dhchap_dhgroups": [ 00:14:11.912 "null", 00:14:11.912 "ffdhe2048", 00:14:11.912 "ffdhe3072", 00:14:11.912 "ffdhe4096", 00:14:11.912 "ffdhe6144", 00:14:11.912 "ffdhe8192" 00:14:11.912 ] 00:14:11.912 } 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "method": "bdev_nvme_set_hotplug", 00:14:11.912 "params": { 00:14:11.912 "period_us": 100000, 00:14:11.912 "enable": false 00:14:11.912 } 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "method": "bdev_malloc_create", 00:14:11.912 "params": { 00:14:11.912 "name": "malloc0", 00:14:11.912 "num_blocks": 8192, 00:14:11.912 "block_size": 4096, 00:14:11.912 "physical_block_size": 4096, 00:14:11.912 "uuid": "7e51f4ea-eb85-4021-92b4-57f1aa06b22d", 00:14:11.912 "optimal_io_boundary": 0, 00:14:11.912 "md_size": 0, 00:14:11.912 "dif_type": 0, 00:14:11.912 "dif_is_head_of_md": false, 00:14:11.912 "dif_pi_format": 0 00:14:11.912 } 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "method": "bdev_wait_for_examine" 00:14:11.912 } 00:14:11.912 ] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "scsi", 00:14:11.912 "config": null 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "scheduler", 00:14:11.912 "config": [ 00:14:11.912 { 00:14:11.912 "method": "framework_set_scheduler", 00:14:11.912 "params": { 00:14:11.912 "name": "static" 00:14:11.912 } 00:14:11.912 } 00:14:11.912 ] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "vhost_scsi", 00:14:11.912 "config": [] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "vhost_blk", 00:14:11.912 "config": [] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "ublk", 00:14:11.912 "config": [ 00:14:11.912 { 00:14:11.912 "method": "ublk_create_target", 00:14:11.912 "params": { 00:14:11.912 "cpumask": "1" 00:14:11.912 } 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "method": "ublk_start_disk", 00:14:11.912 "params": { 00:14:11.912 "bdev_name": "malloc0", 00:14:11.912 "ublk_id": 0, 00:14:11.912 "num_queues": 1, 00:14:11.912 "queue_depth": 128 00:14:11.912 } 00:14:11.912 } 00:14:11.912 ] 00:14:11.912 }, 00:14:11.912 { 00:14:11.912 "subsystem": "nbd", 00:14:11.912 "config": [] 00:14:11.913 }, 00:14:11.913 { 00:14:11.913 "subsystem": "nvmf", 00:14:11.913 "config": [ 00:14:11.913 { 00:14:11.913 "method": "nvmf_set_config", 00:14:11.913 "params": { 00:14:11.913 "discovery_filter": "match_any", 00:14:11.913 "admin_cmd_passthru": { 00:14:11.913 "identify_ctrlr": false 00:14:11.913 }, 00:14:11.913 "dhchap_digests": [ 00:14:11.913 "sha256", 00:14:11.913 "sha384", 00:14:11.913 "sha512" 00:14:11.913 ], 00:14:11.913 "dhchap_dhgroups": [ 00:14:11.913 "null", 00:14:11.913 "ffdhe2048", 00:14:11.913 "ffdhe3072", 00:14:11.913 "ffdhe4096", 00:14:11.913 "ffdhe6144", 00:14:11.913 "ffdhe8192" 00:14:11.913 ] 00:14:11.913 } 00:14:11.913 }, 00:14:11.913 { 00:14:11.913 "method": "nvmf_set_max_subsystems", 00:14:11.913 "params": { 00:14:11.913 "max_subsystems": 1024 00:14:11.913 } 00:14:11.913 }, 00:14:11.913 
{ 00:14:11.913 "method": "nvmf_set_crdt", 00:14:11.913 "params": { 00:14:11.913 "crdt1": 0, 00:14:11.913 "crdt2": 0, 00:14:11.913 "crdt3": 0 00:14:11.913 } 00:14:11.913 } 00:14:11.913 ] 00:14:11.913 }, 00:14:11.913 { 00:14:11.913 "subsystem": "iscsi", 00:14:11.913 "config": [ 00:14:11.913 { 00:14:11.913 "method": "iscsi_set_options", 00:14:11.913 "params": { 00:14:11.913 "node_base": "iqn.2016-06.io.spdk", 00:14:11.913 "max_sessions": 128, 00:14:11.913 "max_connections_per_session": 2, 00:14:11.913 "max_queue_depth": 64, 00:14:11.913 "default_time2wait": 2, 00:14:11.913 "default_time2retain": 20, 00:14:11.913 "first_burst_length": 8192, 00:14:11.913 "immediate_data": true, 00:14:11.913 "allow_duplicated_isid": false, 00:14:11.913 "error_recovery_level": 0, 00:14:11.913 "nop_timeout": 60, 00:14:11.913 "nop_in_interval": 30, 00:14:11.913 "disable_chap": false, 00:14:11.913 "require_chap": false, 00:14:11.913 "mutual_chap": false, 00:14:11.913 "chap_group": 0, 00:14:11.913 "max_large_datain_per_connection": 64, 00:14:11.913 "max_r2t_per_connection": 4, 00:14:11.913 "pdu_pool_size": 36864, 00:14:11.913 "immediate_data_pool_size": 16384, 00:14:11.913 "data_out_pool_size": 2048 00:14:11.913 } 00:14:11.913 } 00:14:11.913 ] 00:14:11.913 } 00:14:11.913 ] 00:14:11.913 }' 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 71233 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71233 ']' 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71233 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71233 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:11.913 killing process with pid 71233 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71233' 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71233 00:14:11.913 21:45:30 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71233 00:14:13.301 [2024-09-29 21:45:31.925635] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:13.301 [2024-09-29 21:45:31.952543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:13.301 [2024-09-29 21:45:31.952729] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:13.301 [2024-09-29 21:45:31.960445] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:13.301 [2024-09-29 21:45:31.960514] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:13.301 [2024-09-29 21:45:31.960525] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:13.301 [2024-09-29 21:45:31.960562] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:13.301 [2024-09-29 21:45:31.960733] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:15.210 21:45:33 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71294 00:14:15.210 21:45:33 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71294 
00:14:15.210 21:45:33 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71294 ']' 00:14:15.210 21:45:33 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.210 21:45:33 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.210 21:45:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:15.210 21:45:33 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.210 21:45:33 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.210 21:45:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:15.210 21:45:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:15.210 "subsystems": [ 00:14:15.210 { 00:14:15.210 "subsystem": "fsdev", 00:14:15.210 "config": [ 00:14:15.210 { 00:14:15.210 "method": "fsdev_set_opts", 00:14:15.210 "params": { 00:14:15.210 "fsdev_io_pool_size": 65535, 00:14:15.210 "fsdev_io_cache_size": 256 00:14:15.210 } 00:14:15.210 } 00:14:15.210 ] 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "subsystem": "keyring", 00:14:15.210 "config": [] 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "subsystem": "iobuf", 00:14:15.210 "config": [ 00:14:15.210 { 00:14:15.210 "method": "iobuf_set_options", 00:14:15.210 "params": { 00:14:15.210 "small_pool_count": 8192, 00:14:15.210 "large_pool_count": 1024, 00:14:15.210 "small_bufsize": 8192, 00:14:15.210 "large_bufsize": 135168 00:14:15.210 } 00:14:15.210 } 00:14:15.210 ] 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "subsystem": "sock", 00:14:15.210 "config": [ 00:14:15.210 { 00:14:15.210 "method": "sock_set_default_impl", 00:14:15.210 "params": { 00:14:15.210 "impl_name": "posix" 00:14:15.210 } 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "method": "sock_impl_set_options", 00:14:15.210 "params": { 00:14:15.210 "impl_name": "ssl", 00:14:15.210 "recv_buf_size": 4096, 00:14:15.210 "send_buf_size": 4096, 00:14:15.210 "enable_recv_pipe": true, 00:14:15.210 "enable_quickack": false, 00:14:15.210 "enable_placement_id": 0, 00:14:15.210 "enable_zerocopy_send_server": true, 00:14:15.210 "enable_zerocopy_send_client": false, 00:14:15.210 "zerocopy_threshold": 0, 00:14:15.210 "tls_version": 0, 00:14:15.210 "enable_ktls": false 00:14:15.210 } 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "method": "sock_impl_set_options", 00:14:15.210 "params": { 00:14:15.210 "impl_name": "posix", 00:14:15.210 "recv_buf_size": 2097152, 00:14:15.210 "send_buf_size": 2097152, 00:14:15.210 "enable_recv_pipe": true, 00:14:15.210 "enable_quickack": false, 00:14:15.210 "enable_placement_id": 0, 00:14:15.210 "enable_zerocopy_send_server": true, 00:14:15.210 "enable_zerocopy_send_client": false, 00:14:15.210 "zerocopy_threshold": 0, 00:14:15.210 "tls_version": 0, 00:14:15.210 "enable_ktls": false 00:14:15.210 } 00:14:15.210 } 00:14:15.210 ] 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "subsystem": "vmd", 00:14:15.210 "config": [] 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "subsystem": "accel", 00:14:15.210 "config": [ 00:14:15.210 { 00:14:15.210 "method": "accel_set_options", 00:14:15.210 "params": { 00:14:15.210 "small_cache_size": 128, 00:14:15.210 "large_cache_size": 16, 00:14:15.210 "task_count": 2048, 00:14:15.210 
"sequence_count": 2048, 00:14:15.210 "buf_count": 2048 00:14:15.210 } 00:14:15.210 } 00:14:15.210 ] 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "subsystem": "bdev", 00:14:15.210 "config": [ 00:14:15.210 { 00:14:15.210 "method": "bdev_set_options", 00:14:15.210 "params": { 00:14:15.210 "bdev_io_pool_size": 65535, 00:14:15.210 "bdev_io_cache_size": 256, 00:14:15.210 "bdev_auto_examine": true, 00:14:15.210 "iobuf_small_cache_size": 128, 00:14:15.210 "iobuf_large_cache_size": 16 00:14:15.210 } 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "method": "bdev_raid_set_options", 00:14:15.210 "params": { 00:14:15.210 "process_window_size_kb": 1024, 00:14:15.210 "process_max_bandwidth_mb_sec": 0 00:14:15.210 } 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "method": "bdev_iscsi_set_options", 00:14:15.210 "params": { 00:14:15.210 "timeout_sec": 30 00:14:15.210 } 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "method": "bdev_nvme_set_options", 00:14:15.210 "params": { 00:14:15.210 "action_on_timeout": "none", 00:14:15.210 "timeout_us": 0, 00:14:15.210 "timeout_admin_us": 0, 00:14:15.210 "keep_alive_timeout_ms": 10000, 00:14:15.210 "arbitration_burst": 0, 00:14:15.210 "low_priority_weight": 0, 00:14:15.210 "medium_priority_weight": 0, 00:14:15.210 "high_priority_weight": 0, 00:14:15.210 "nvme_adminq_poll_period_us": 10000, 00:14:15.210 "nvme_ioq_poll_period_us": 0, 00:14:15.210 "io_queue_requests": 0, 00:14:15.210 "delay_cmd_submit": true, 00:14:15.210 "transport_retry_count": 4, 00:14:15.210 "bdev_retry_count": 3, 00:14:15.210 "transport_ack_timeout": 0, 00:14:15.210 "ctrlr_loss_timeout_sec": 0, 00:14:15.210 "reconnect_delay_sec": 0, 00:14:15.210 "fast_io_fail_timeout_sec": 0, 00:14:15.210 "disable_auto_failback": false, 00:14:15.210 "generate_uuids": false, 00:14:15.210 "transport_tos": 0, 00:14:15.210 "nvme_error_stat": false, 00:14:15.210 "rdma_srq_size": 0, 00:14:15.210 "io_path_stat": false, 00:14:15.210 "allow_accel_sequence": false, 00:14:15.210 "rdma_max_cq_size": 0, 00:14:15.210 "rdma_cm_event_timeout_ms": 0, 00:14:15.210 "dhchap_digests": [ 00:14:15.210 "sha256", 00:14:15.210 "sha384", 00:14:15.210 "sha512" 00:14:15.210 ], 00:14:15.210 "dhchap_dhgroups": [ 00:14:15.210 "null", 00:14:15.210 "ffdhe2048", 00:14:15.210 "ffdhe3072", 00:14:15.210 "ffdhe4096", 00:14:15.210 "ffdhe6144", 00:14:15.210 "ffdhe8192" 00:14:15.210 ] 00:14:15.210 } 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "method": "bdev_nvme_set_hotplug", 00:14:15.210 "params": { 00:14:15.210 "period_us": 100000, 00:14:15.210 "enable": false 00:14:15.210 } 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "method": "bdev_malloc_create", 00:14:15.210 "params": { 00:14:15.210 "name": "malloc0", 00:14:15.210 "num_blocks": 8192, 00:14:15.210 "block_size": 4096, 00:14:15.210 "physical_block_size": 4096, 00:14:15.210 "uuid": "7e51f4ea-eb85-4021-92b4-57f1aa06b22d", 00:14:15.210 "optimal_io_boundary": 0, 00:14:15.210 "md_size": 0, 00:14:15.210 "dif_type": 0, 00:14:15.210 "dif_is_head_of_md": false, 00:14:15.210 "dif_pi_format": 0 00:14:15.210 } 00:14:15.210 }, 00:14:15.211 { 00:14:15.211 "method": "bdev_wait_for_examine" 00:14:15.211 } 00:14:15.211 ] 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "subsystem": "scsi", 00:14:15.211 "config": null 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "subsystem": "scheduler", 00:14:15.211 "config": [ 00:14:15.211 { 00:14:15.211 "method": "framework_set_scheduler", 00:14:15.211 "params": { 00:14:15.211 "name": "static" 00:14:15.211 } 00:14:15.211 } 00:14:15.211 ] 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "subsystem": 
"vhost_scsi", 00:14:15.211 "config": [] 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "subsystem": "vhost_blk", 00:14:15.211 "config": [] 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "subsystem": "ublk", 00:14:15.211 "config": [ 00:14:15.211 { 00:14:15.211 "method": "ublk_create_target", 00:14:15.211 "params": { 00:14:15.211 "cpumask": "1" 00:14:15.211 } 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "method": "ublk_start_disk", 00:14:15.211 "params": { 00:14:15.211 "bdev_name": "malloc0", 00:14:15.211 "ublk_id": 0, 00:14:15.211 "num_queues": 1, 00:14:15.211 "queue_depth": 128 00:14:15.211 } 00:14:15.211 } 00:14:15.211 ] 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "subsystem": "nbd", 00:14:15.211 "config": [] 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "subsystem": "nvmf", 00:14:15.211 "config": [ 00:14:15.211 { 00:14:15.211 "method": "nvmf_set_config", 00:14:15.211 "params": { 00:14:15.211 "discovery_filter": "match_any", 00:14:15.211 "admin_cmd_passthru": { 00:14:15.211 "identify_ctrlr": false 00:14:15.211 }, 00:14:15.211 "dhchap_digests": [ 00:14:15.211 "sha256", 00:14:15.211 "sha384", 00:14:15.211 "sha512" 00:14:15.211 ], 00:14:15.211 "dhchap_dhgroups": [ 00:14:15.211 "null", 00:14:15.211 "ffdhe2048", 00:14:15.211 "ffdhe3072", 00:14:15.211 "ffdhe4096", 00:14:15.211 "ffdhe6144", 00:14:15.211 "ffdhe8192" 00:14:15.211 ] 00:14:15.211 } 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "method": "nvmf_set_max_subsystems", 00:14:15.211 "params": { 00:14:15.211 "max_subsystems": 1024 00:14:15.211 } 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "method": "nvmf_set_crdt", 00:14:15.211 "params": { 00:14:15.211 "crdt1": 0, 00:14:15.211 "crdt2": 0, 00:14:15.211 "crdt3": 0 00:14:15.211 } 00:14:15.211 } 00:14:15.211 ] 00:14:15.211 }, 00:14:15.211 { 00:14:15.211 "subsystem": "iscsi", 00:14:15.211 "config": [ 00:14:15.211 { 00:14:15.211 "method": "iscsi_set_options", 00:14:15.211 "params": { 00:14:15.211 "node_base": "iqn.2016-06.io.spdk", 00:14:15.211 "max_sessions": 128, 00:14:15.211 "max_connections_per_session": 2, 00:14:15.211 "max_queue_depth": 64, 00:14:15.211 "default_time2wait": 2, 00:14:15.211 "default_time2retain": 20, 00:14:15.211 "first_burst_length": 8192, 00:14:15.211 "immediate_data": true, 00:14:15.211 "allow_duplicated_isid": false, 00:14:15.211 "error_recovery_level": 0, 00:14:15.211 "nop_timeout": 60, 00:14:15.211 "nop_in_interval": 30, 00:14:15.211 "disable_chap": false, 00:14:15.211 "require_chap": false, 00:14:15.211 "mutual_chap": false, 00:14:15.211 "chap_group": 0, 00:14:15.211 "max_large_datain_per_connection": 64, 00:14:15.211 "max_r2t_per_connection": 4, 00:14:15.211 "pdu_pool_size": 36864, 00:14:15.211 "immediate_data_pool_size": 16384, 00:14:15.211 "data_out_pool_size": 2048 00:14:15.211 } 00:14:15.211 } 00:14:15.211 ] 00:14:15.211 } 00:14:15.211 ] 00:14:15.211 }' 00:14:15.211 [2024-09-29 21:45:33.895762] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:15.211 [2024-09-29 21:45:33.895883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71294 ] 00:14:15.211 [2024-09-29 21:45:34.041556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.470 [2024-09-29 21:45:34.214807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.037 [2024-09-29 21:45:34.895405] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:16.037 [2024-09-29 21:45:34.896079] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:16.037 [2024-09-29 21:45:34.903497] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:16.037 [2024-09-29 21:45:34.903562] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:16.037 [2024-09-29 21:45:34.903568] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:16.037 [2024-09-29 21:45:34.903574] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.037 [2024-09-29 21:45:34.912475] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.037 [2024-09-29 21:45:34.912494] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:16.037 [2024-09-29 21:45:34.919413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.037 [2024-09-29 21:45:34.919497] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:16.037 [2024-09-29 21:45:34.936426] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.037 21:45:34 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.037 21:45:34 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:16.037 21:45:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:16.037 21:45:34 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.037 21:45:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:16.037 21:45:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:16.037 21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.037 21:45:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:16.037 21:45:35 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:16.037 21:45:35 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71294 00:14:16.037 21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71294 ']' 00:14:16.037 21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71294 00:14:16.037 21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:16.037 21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.037 21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71294 00:14:16.296 killing process with pid 71294 00:14:16.296 21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:16.296 
21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:16.296 21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71294' 00:14:16.296 21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71294 00:14:16.296 21:45:35 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71294 00:14:17.231 [2024-09-29 21:45:36.059907] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:17.231 [2024-09-29 21:45:36.091491] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:17.231 [2024-09-29 21:45:36.091594] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:17.231 [2024-09-29 21:45:36.099408] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:17.231 [2024-09-29 21:45:36.099451] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:17.231 [2024-09-29 21:45:36.099458] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:17.231 [2024-09-29 21:45:36.099485] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:17.231 [2024-09-29 21:45:36.099605] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:19.133 21:45:37 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:19.133 00:14:19.133 real 0m8.716s 00:14:19.133 user 0m5.685s 00:14:19.133 sys 0m3.693s 00:14:19.133 21:45:37 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.133 21:45:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:19.133 ************************************ 00:14:19.133 END TEST test_save_ublk_config 00:14:19.133 ************************************ 00:14:19.133 21:45:37 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71371 00:14:19.133 21:45:37 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:19.133 21:45:37 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71371 00:14:19.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.133 21:45:37 ublk -- common/autotest_common.sh@831 -- # '[' -z 71371 ']' 00:14:19.133 21:45:37 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.133 21:45:37 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:19.133 21:45:37 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.133 21:45:37 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.133 21:45:37 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.133 21:45:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:19.133 [2024-09-29 21:45:37.791004] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
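[Note: the harness now launches a two-core target (-m 0x3) for the create tests. The setup that test_create_ublk drives over RPC below amounts to roughly this sketch; rpc.py argument order is assumed to match the rpc_cmd calls visible in the log:

    ./build/bin/spdk_tgt -m 0x3 -L ublk &
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create 128 4096             # 128 MiB bdev, auto-named Malloc0
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # /dev/ublkb0 with 4 queues, depth 512
]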
00:14:19.133 [2024-09-29 21:45:37.791108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71371 ] 00:14:19.133 [2024-09-29 21:45:37.933716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:19.133 [2024-09-29 21:45:38.103730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.133 [2024-09-29 21:45:38.103832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.700 21:45:38 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.700 21:45:38 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:19.700 21:45:38 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:19.700 21:45:38 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:19.700 21:45:38 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.700 21:45:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:19.700 ************************************ 00:14:19.700 START TEST test_create_ublk 00:14:19.700 ************************************ 00:14:19.700 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:19.700 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:19.700 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.700 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:19.700 [2024-09-29 21:45:38.659408] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:19.700 [2024-09-29 21:45:38.660687] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:19.700 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.700 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:19.700 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:19.700 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.700 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:19.958 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.958 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:19.958 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:19.958 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.958 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:19.958 [2024-09-29 21:45:38.840556] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:19.958 [2024-09-29 21:45:38.840884] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:19.958 [2024-09-29 21:45:38.840893] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:19.958 [2024-09-29 21:45:38.840899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:19.958 [2024-09-29 21:45:38.851425] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:19.958 [2024-09-29 21:45:38.851442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:19.958 
[2024-09-29 21:45:38.859409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:19.958 [2024-09-29 21:45:38.859925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:19.958 [2024-09-29 21:45:38.877422] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:19.958 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.958 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:19.958 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:19.958 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:19.958 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.958 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:19.958 21:45:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.958 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:19.958 { 00:14:19.958 "ublk_device": "/dev/ublkb0", 00:14:19.958 "id": 0, 00:14:19.958 "queue_depth": 512, 00:14:19.958 "num_queues": 4, 00:14:19.958 "bdev_name": "Malloc0" 00:14:19.958 } 00:14:19.958 ]' 00:14:19.958 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:19.958 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:19.958 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:20.217 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:20.217 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:20.217 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:20.217 21:45:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:20.217 21:45:39 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:20.217 21:45:39 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:20.217 21:45:39 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:20.217 21:45:39 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
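[Note: the fio_template assembled above expands to the standalone command below (verbatim from the log, only line-wrapped here). It spends the full 10 s runtime writing a 0xcc verify pattern to the ublk block device; as fio itself notes in the run output, the verification read phase never starts because the write phase uses all of the runtime:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
]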
00:14:20.217 21:45:39 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:20.217 fio: verification read phase will never start because write phase uses all of runtime 00:14:20.217 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:20.217 fio-3.35 00:14:20.217 Starting 1 process 00:14:32.421 00:14:32.421 fio_test: (groupid=0, jobs=1): err= 0: pid=71417: Sun Sep 29 21:45:49 2024 00:14:32.421 write: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(577MiB/10001msec); 0 zone resets 00:14:32.421 clat (usec): min=41, max=8007, avg=66.90, stdev=133.37 00:14:32.421 lat (usec): min=41, max=8012, avg=67.37, stdev=133.41 00:14:32.421 clat percentiles (usec): 00:14:32.421 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 52], 00:14:32.421 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 60], 00:14:32.421 | 70.00th=[ 65], 80.00th=[ 69], 90.00th=[ 73], 95.00th=[ 76], 00:14:32.421 | 99.00th=[ 192], 99.50th=[ 249], 99.90th=[ 2868], 99.95th=[ 3490], 00:14:32.421 | 99.99th=[ 4146] 00:14:32.421 bw ( KiB/s): min=21912, max=66912, per=99.26%, avg=58615.16, stdev=10765.19, samples=19 00:14:32.421 iops : min= 5478, max=16728, avg=14653.79, stdev=2691.30, samples=19 00:14:32.421 lat (usec) : 50=6.76%, 100=91.97%, 250=0.78%, 500=0.27%, 750=0.01% 00:14:32.421 lat (usec) : 1000=0.01% 00:14:32.421 lat (msec) : 2=0.06%, 4=0.12%, 10=0.03% 00:14:32.421 cpu : usr=2.52%, sys=14.74%, ctx=147651, majf=0, minf=796 00:14:32.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:32.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.421 issued rwts: total=0,147649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:32.421 00:14:32.421 Run status group 0 (all jobs): 00:14:32.421 WRITE: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=577MiB (605MB), run=10001-10001msec 00:14:32.421 00:14:32.421 Disk stats (read/write): 00:14:32.421 ublkb0: ios=0/145889, merge=0/0, ticks=0/7860, in_queue=7861, util=99.10% 00:14:32.421 21:45:49 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:32.421 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.421 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.421 [2024-09-29 21:45:49.288182] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:32.421 [2024-09-29 21:45:49.335441] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:32.421 [2024-09-29 21:45:49.336042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:32.421 [2024-09-29 21:45:49.344433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:32.421 [2024-09-29 21:45:49.344728] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:32.421 [2024-09-29 21:45:49.344804] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:32.421 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.421 21:45:49 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:32.421 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:32.421 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:32.421 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:32.421 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.421 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 [2024-09-29 21:45:49.359472] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:32.422 request: 00:14:32.422 { 00:14:32.422 "ublk_id": 0, 00:14:32.422 "method": "ublk_stop_disk", 00:14:32.422 "req_id": 1 00:14:32.422 } 00:14:32.422 Got JSON-RPC error response 00:14:32.422 response: 00:14:32.422 { 00:14:32.422 "code": -19, 00:14:32.422 "message": "No such device" 00:14:32.422 } 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.422 21:45:49 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 [2024-09-29 21:45:49.375464] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:32.422 [2024-09-29 21:45:49.377299] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:32.422 [2024-09-29 21:45:49.377336] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:49 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:49 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:32.422 21:45:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:32.422 21:45:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:32.422 21:45:49 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:32.422 21:45:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:32.422 21:45:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:32.422 ************************************ 00:14:32.422 END TEST test_create_ublk 00:14:32.422 ************************************ 00:14:32.422 21:45:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:32.422 00:14:32.422 real 0m11.199s 00:14:32.422 user 0m0.558s 00:14:32.422 sys 0m1.553s 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.422 21:45:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 21:45:49 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:32.422 21:45:49 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:32.422 21:45:49 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.422 21:45:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 ************************************ 00:14:32.422 START TEST test_create_multi_ublk 00:14:32.422 ************************************ 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 [2024-09-29 21:45:49.899402] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:32.422 [2024-09-29 21:45:49.900732] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 [2024-09-29 21:45:50.139532] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:32.422 [2024-09-29 21:45:50.139856] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:32.422 [2024-09-29 21:45:50.139868] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:32.422 [2024-09-29 21:45:50.139877] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:32.422 [2024-09-29 21:45:50.159410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:32.422 [2024-09-29 21:45:50.159432] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:32.422 [2024-09-29 21:45:50.171411] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:32.422 [2024-09-29 21:45:50.171947] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:32.422 [2024-09-29 21:45:50.198411] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 [2024-09-29 21:45:50.424518] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:32.422 [2024-09-29 21:45:50.424846] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:32.422 [2024-09-29 21:45:50.424859] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:32.422 [2024-09-29 21:45:50.424874] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:32.422 [2024-09-29 21:45:50.432422] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:32.422 [2024-09-29 21:45:50.432440] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:32.422 [2024-09-29 21:45:50.440413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:32.422 [2024-09-29 21:45:50.440937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:32.422 [2024-09-29 21:45:50.457423] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.422 
21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.422 [2024-09-29 21:45:50.632515] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:32.422 [2024-09-29 21:45:50.632839] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:32.422 [2024-09-29 21:45:50.632851] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:32.422 [2024-09-29 21:45:50.632858] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:32.422 [2024-09-29 21:45:50.640417] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:32.422 [2024-09-29 21:45:50.640437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:32.422 [2024-09-29 21:45:50.648417] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:32.422 [2024-09-29 21:45:50.648943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:32.422 [2024-09-29 21:45:50.665420] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:32.422 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.423 [2024-09-29 21:45:50.840519] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:32.423 [2024-09-29 21:45:50.840838] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:32.423 [2024-09-29 21:45:50.840850] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:32.423 [2024-09-29 21:45:50.840856] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:32.423 
[2024-09-29 21:45:50.848420] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:32.423 [2024-09-29 21:45:50.848437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:32.423 [2024-09-29 21:45:50.856417] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:32.423 [2024-09-29 21:45:50.856939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:32.423 [2024-09-29 21:45:50.863439] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:32.423 { 00:14:32.423 "ublk_device": "/dev/ublkb0", 00:14:32.423 "id": 0, 00:14:32.423 "queue_depth": 512, 00:14:32.423 "num_queues": 4, 00:14:32.423 "bdev_name": "Malloc0" 00:14:32.423 }, 00:14:32.423 { 00:14:32.423 "ublk_device": "/dev/ublkb1", 00:14:32.423 "id": 1, 00:14:32.423 "queue_depth": 512, 00:14:32.423 "num_queues": 4, 00:14:32.423 "bdev_name": "Malloc1" 00:14:32.423 }, 00:14:32.423 { 00:14:32.423 "ublk_device": "/dev/ublkb2", 00:14:32.423 "id": 2, 00:14:32.423 "queue_depth": 512, 00:14:32.423 "num_queues": 4, 00:14:32.423 "bdev_name": "Malloc2" 00:14:32.423 }, 00:14:32.423 { 00:14:32.423 "ublk_device": "/dev/ublkb3", 00:14:32.423 "id": 3, 00:14:32.423 "queue_depth": 512, 00:14:32.423 "num_queues": 4, 00:14:32.423 "bdev_name": "Malloc3" 00:14:32.423 } 00:14:32.423 ]' 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:32.423 21:45:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:32.423 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.682 [2024-09-29 21:45:51.536486] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:32.682 [2024-09-29 21:45:51.568457] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:32.682 [2024-09-29 21:45:51.569273] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:32.682 [2024-09-29 21:45:51.577452] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:32.682 [2024-09-29 21:45:51.577706] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:32.682 [2024-09-29 21:45:51.577718] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.682 [2024-09-29 21:45:51.592468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:32.682 [2024-09-29 21:45:51.631449] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:32.682 [2024-09-29 21:45:51.632213] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:32.682 [2024-09-29 21:45:51.640425] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:32.682 [2024-09-29 21:45:51.640671] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:32.682 [2024-09-29 21:45:51.640686] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.682 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:32.682 [2024-09-29 21:45:51.648477] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:32.941 [2024-09-29 21:45:51.682978] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:32.941 [2024-09-29 21:45:51.683996] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:32.941 [2024-09-29 21:45:51.688420] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:32.941 [2024-09-29 21:45:51.688657] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:32.941 [2024-09-29 21:45:51.688671] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:32.941 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.941 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:32.941 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:32.941 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.941 21:45:51 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.941 [2024-09-29 21:45:51.704467] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:32.941 [2024-09-29 21:45:51.748437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:32.941 [2024-09-29 21:45:51.749087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:32.941 [2024-09-29 21:45:51.756410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:32.941 [2024-09-29 21:45:51.756641] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:32.941 [2024-09-29 21:45:51.756653] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:32.941 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.941 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:33.199 [2024-09-29 21:45:51.948460] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:33.199 [2024-09-29 21:45:51.950508] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:33.199 [2024-09-29 21:45:51.950534] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:33.199 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:33.199 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:33.199 21:45:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:33.199 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.199 21:45:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:33.458 21:45:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.458 21:45:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:33.458 21:45:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:33.458 21:45:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.458 21:45:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.028 21:45:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.028 21:45:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:34.028 21:45:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:34.028 21:45:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.028 21:45:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.028 21:45:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.028 21:45:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:34.028 21:45:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:34.028 21:45:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.028 21:45:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:34.286 21:45:53 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:34.286 ************************************ 00:14:34.286 END TEST test_create_multi_ublk 00:14:34.286 ************************************ 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:34.286 00:14:34.286 real 0m3.315s 00:14:34.286 user 0m0.823s 00:14:34.286 sys 0m0.147s 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.286 21:45:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.286 21:45:53 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:34.286 21:45:53 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:34.286 21:45:53 ublk -- ublk/ublk.sh@130 -- # killprocess 71371 00:14:34.286 21:45:53 ublk -- common/autotest_common.sh@950 -- # '[' -z 71371 ']' 00:14:34.286 21:45:53 ublk -- common/autotest_common.sh@954 -- # kill -0 71371 00:14:34.286 21:45:53 ublk -- common/autotest_common.sh@955 -- # uname 00:14:34.286 21:45:53 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:34.286 21:45:53 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71371 00:14:34.286 killing process with pid 71371 00:14:34.286 21:45:53 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:34.286 21:45:53 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:34.286 21:45:53 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71371' 00:14:34.286 21:45:53 ublk -- common/autotest_common.sh@969 -- # kill 71371 00:14:34.286 21:45:53 ublk -- common/autotest_common.sh@974 -- # wait 71371 00:14:34.852 [2024-09-29 21:45:53.822382] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:34.852 [2024-09-29 21:45:53.822613] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:35.789 00:14:35.789 real 0m25.823s 00:14:35.789 user 0m35.625s 00:14:35.789 sys 0m10.359s 00:14:35.789 21:45:54 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:35.789 21:45:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:35.789 ************************************ 00:14:35.789 END TEST ublk 00:14:35.789 ************************************ 00:14:35.789 21:45:54 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:35.789 
21:45:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:35.789 21:45:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:35.789 21:45:54 -- common/autotest_common.sh@10 -- # set +x 00:14:35.789 ************************************ 00:14:35.789 START TEST ublk_recovery 00:14:35.789 ************************************ 00:14:35.789 21:45:54 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:35.789 * Looking for test storage... 00:14:35.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:35.789 21:45:54 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:35.789 21:45:54 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:35.789 21:45:54 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:36.048 21:45:54 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:36.048 21:45:54 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.049 21:45:54 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:36.049 21:45:54 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:36.049 21:45:54 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.049 21:45:54 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:36.049 21:45:54 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.049 21:45:54 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.049 21:45:54 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.049 21:45:54 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:36.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.049 --rc genhtml_branch_coverage=1 00:14:36.049 --rc genhtml_function_coverage=1 00:14:36.049 --rc genhtml_legend=1 00:14:36.049 --rc geninfo_all_blocks=1 00:14:36.049 --rc geninfo_unexecuted_blocks=1 00:14:36.049 00:14:36.049 ' 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:36.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.049 --rc genhtml_branch_coverage=1 00:14:36.049 --rc genhtml_function_coverage=1 00:14:36.049 --rc genhtml_legend=1 00:14:36.049 --rc geninfo_all_blocks=1 00:14:36.049 --rc geninfo_unexecuted_blocks=1 00:14:36.049 00:14:36.049 ' 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:36.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.049 --rc genhtml_branch_coverage=1 00:14:36.049 --rc genhtml_function_coverage=1 00:14:36.049 --rc genhtml_legend=1 00:14:36.049 --rc geninfo_all_blocks=1 00:14:36.049 --rc geninfo_unexecuted_blocks=1 00:14:36.049 00:14:36.049 ' 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:36.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.049 --rc genhtml_branch_coverage=1 00:14:36.049 --rc genhtml_function_coverage=1 00:14:36.049 --rc genhtml_legend=1 00:14:36.049 --rc geninfo_all_blocks=1 00:14:36.049 --rc geninfo_unexecuted_blocks=1 00:14:36.049 00:14:36.049 ' 00:14:36.049 21:45:54 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:36.049 21:45:54 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:36.049 21:45:54 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:36.049 21:45:54 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:36.049 21:45:54 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:36.049 21:45:54 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:36.049 21:45:54 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:36.049 21:45:54 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:36.049 21:45:54 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:36.049 21:45:54 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:36.049 21:45:54 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71764 00:14:36.049 21:45:54 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.049 21:45:54 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:36.049 21:45:54 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71764 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71764 ']' 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.049 21:45:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.049 [2024-09-29 21:45:54.887787] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:36.049 [2024-09-29 21:45:54.888060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71764 ] 00:14:36.307 [2024-09-29 21:45:55.036983] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:36.307 [2024-09-29 21:45:55.204931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.307 [2024-09-29 21:45:55.205033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.874 21:45:55 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.874 21:45:55 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:36.874 21:45:55 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:36.874 21:45:55 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.874 21:45:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.874 [2024-09-29 21:45:55.736408] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:36.874 [2024-09-29 21:45:55.737686] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:36.874 21:45:55 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.874 21:45:55 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:36.874 21:45:55 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.874 21:45:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.874 malloc0 00:14:36.874 21:45:55 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.874 21:45:55 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:36.874 21:45:55 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.874 21:45:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.874 [2024-09-29 21:45:55.832523] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:36.874 [2024-09-29 21:45:55.832625] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:36.874 [2024-09-29 21:45:55.832634] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:36.874 [2024-09-29 21:45:55.832641] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:36.874 [2024-09-29 21:45:55.841513] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:36.875 [2024-09-29 21:45:55.841530] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:36.875 [2024-09-29 21:45:55.846073] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:36.875 [2024-09-29 21:45:55.846212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:37.133 [2024-09-29 21:45:55.867876] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:37.133 1 00:14:37.133 21:45:55 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.133 21:45:55 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:38.067 21:45:56 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71799 00:14:38.067 21:45:56 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:38.067 21:45:56 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:38.067 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:38.067 fio-3.35 00:14:38.067 Starting 1 process 00:14:43.362 21:46:01 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71764 00:14:43.362 21:46:01 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:48.655 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71764 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:48.655 21:46:06 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71909 00:14:48.655 21:46:06 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.655 21:46:06 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71909 00:14:48.655 21:46:06 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71909 ']' 00:14:48.655 21:46:06 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.655 21:46:06 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.655 21:46:06 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:48.655 21:46:06 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.655 21:46:06 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.655 21:46:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.655 [2024-09-29 21:46:06.964531] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:48.655 [2024-09-29 21:46:06.964821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71909 ] 00:14:48.655 [2024-09-29 21:46:07.113418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:48.655 [2024-09-29 21:46:07.283041] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.655 [2024-09-29 21:46:07.283138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.914 21:46:07 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.914 21:46:07 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:48.914 21:46:07 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:48.914 21:46:07 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.914 21:46:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:48.914 [2024-09-29 21:46:07.844408] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:48.914 [2024-09-29 21:46:07.845664] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:48.914 21:46:07 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.914 21:46:07 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:48.914 21:46:07 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.914 21:46:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:49.172 malloc0 00:14:49.172 21:46:07 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.172 21:46:07 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:49.172 21:46:07 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.172 21:46:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:49.172 [2024-09-29 21:46:07.936768] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:49.172 [2024-09-29 21:46:07.936802] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:49.172 [2024-09-29 21:46:07.936811] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:49.172 [2024-09-29 21:46:07.944428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:49.172 [2024-09-29 21:46:07.944448] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:49.172 [2024-09-29 21:46:07.944456] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:49.172 [2024-09-29 21:46:07.944527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:49.172 1 00:14:49.172 21:46:07 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.172 21:46:07 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71799 00:14:49.172 [2024-09-29 21:46:07.952409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:49.172 [2024-09-29 21:46:07.956122] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:49.172 [2024-09-29 21:46:07.966583] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:49.172 [2024-09-29 
21:46:07.966600] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:45.391 00:15:45.391 fio_test: (groupid=0, jobs=1): err= 0: pid=71802: Sun Sep 29 21:46:57 2024 00:15:45.391 read: IOPS=24.8k, BW=96.9MiB/s (102MB/s)(5816MiB/60002msec) 00:15:45.391 slat (nsec): min=1281, max=468442, avg=5575.26, stdev=1505.11 00:15:45.391 clat (usec): min=958, max=6096.2k, avg=2529.75, stdev=39945.29 00:15:45.391 lat (usec): min=964, max=6096.2k, avg=2535.32, stdev=39945.28 00:15:45.391 clat percentiles (usec): 00:15:45.391 | 1.00th=[ 1893], 5.00th=[ 2040], 10.00th=[ 2073], 20.00th=[ 2114], 00:15:45.391 | 30.00th=[ 2114], 40.00th=[ 2147], 50.00th=[ 2147], 60.00th=[ 2180], 00:15:45.391 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2278], 95.00th=[ 3097], 00:15:45.391 | 99.00th=[ 4948], 99.50th=[ 5342], 99.90th=[ 6718], 99.95th=[ 7832], 00:15:45.391 | 99.99th=[12518] 00:15:45.391 bw ( KiB/s): min=12704, max=114560, per=100.00%, avg=109326.00, stdev=12449.66, samples=108 00:15:45.391 iops : min= 3176, max=28640, avg=27331.50, stdev=3112.41, samples=108 00:15:45.391 write: IOPS=24.8k, BW=96.8MiB/s (102MB/s)(5809MiB/60002msec); 0 zone resets 00:15:45.391 slat (nsec): min=1537, max=426110, avg=5796.05, stdev=1463.32 00:15:45.391 clat (usec): min=791, max=6096.4k, avg=2619.48, stdev=39971.49 00:15:45.391 lat (usec): min=796, max=6096.4k, avg=2625.27, stdev=39971.49 00:15:45.391 clat percentiles (usec): 00:15:45.391 | 1.00th=[ 1942], 5.00th=[ 2147], 10.00th=[ 2180], 20.00th=[ 2212], 00:15:45.391 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2245], 60.00th=[ 2278], 00:15:45.391 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2376], 95.00th=[ 3032], 00:15:45.391 | 99.00th=[ 4948], 99.50th=[ 5407], 99.90th=[ 6849], 99.95th=[ 8029], 00:15:45.391 | 99.99th=[12518] 00:15:45.391 bw ( KiB/s): min=12912, max=113544, per=100.00%, avg=109201.48, stdev=12362.22, samples=108 00:15:45.391 iops : min= 3228, max=28386, avg=27300.37, stdev=3090.55, samples=108 00:15:45.391 lat (usec) : 1000=0.01% 00:15:45.391 lat (msec) : 2=1.90%, 4=95.64%, 10=2.44%, 20=0.01%, >=2000=0.01% 00:15:45.391 cpu : usr=5.56%, sys=28.99%, ctx=97696, majf=0, minf=14 00:15:45.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:45.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:45.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:45.391 issued rwts: total=1488894,1487042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:45.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:45.391 00:15:45.391 Run status group 0 (all jobs): 00:15:45.391 READ: bw=96.9MiB/s (102MB/s), 96.9MiB/s-96.9MiB/s (102MB/s-102MB/s), io=5816MiB (6099MB), run=60002-60002msec 00:15:45.391 WRITE: bw=96.8MiB/s (102MB/s), 96.8MiB/s-96.8MiB/s (102MB/s-102MB/s), io=5809MiB (6091MB), run=60002-60002msec 00:15:45.391 00:15:45.391 Disk stats (read/write): 00:15:45.391 ublkb1: ios=1485739/1483997, merge=0/0, ticks=3674616/3674555, in_queue=7349172, util=99.90% 00:15:45.391 21:46:57 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.391 [2024-09-29 21:46:57.142088] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:45.391 [2024-09-29 21:46:57.192432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 
00:15:45.391 [2024-09-29 21:46:57.196543] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:45.391 [2024-09-29 21:46:57.207430] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:45.391 [2024-09-29 21:46:57.207613] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:45.391 [2024-09-29 21:46:57.207678] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.391 21:46:57 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.391 [2024-09-29 21:46:57.215483] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:45.391 [2024-09-29 21:46:57.217582] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:45.391 [2024-09-29 21:46:57.217613] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.391 21:46:57 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:45.391 21:46:57 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:45.391 21:46:57 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71909 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 71909 ']' 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 71909 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71909 00:15:45.391 killing process with pid 71909 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71909' 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@969 -- # kill 71909 00:15:45.391 21:46:57 ublk_recovery -- common/autotest_common.sh@974 -- # wait 71909 00:15:45.391 [2024-09-29 21:46:58.318958] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:45.391 [2024-09-29 21:46:58.319014] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:45.391 ************************************ 00:15:45.391 END TEST ublk_recovery 00:15:45.391 ************************************ 00:15:45.391 00:15:45.391 real 1m4.487s 00:15:45.391 user 1m38.979s 00:15:45.391 sys 0m39.725s 00:15:45.391 21:46:59 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:45.391 21:46:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:45.391 21:46:59 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:45.391 21:46:59 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:45.391 21:46:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:45.391 21:46:59 -- common/autotest_common.sh@10 -- # set +x 00:15:45.391 21:46:59 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:45.391 21:46:59 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:45.391 21:46:59 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:45.392 21:46:59 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 
00:15:45.392 21:46:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:45.392 21:46:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:45.392 21:46:59 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:45.392 21:46:59 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:45.392 21:46:59 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:45.392 21:46:59 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:45.392 21:46:59 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:45.392 21:46:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:45.392 21:46:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:45.392 21:46:59 -- common/autotest_common.sh@10 -- # set +x 00:15:45.392 ************************************ 00:15:45.392 START TEST ftl 00:15:45.392 ************************************ 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:45.392 * Looking for test storage... 00:15:45.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:45.392 21:46:59 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.392 21:46:59 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.392 21:46:59 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.392 21:46:59 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.392 21:46:59 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.392 21:46:59 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.392 21:46:59 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.392 21:46:59 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.392 21:46:59 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.392 21:46:59 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.392 21:46:59 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.392 21:46:59 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:45.392 21:46:59 ftl -- scripts/common.sh@345 -- # : 1 00:15:45.392 21:46:59 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.392 21:46:59 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.392 21:46:59 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:45.392 21:46:59 ftl -- scripts/common.sh@353 -- # local d=1 00:15:45.392 21:46:59 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.392 21:46:59 ftl -- scripts/common.sh@355 -- # echo 1 00:15:45.392 21:46:59 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.392 21:46:59 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:45.392 21:46:59 ftl -- scripts/common.sh@353 -- # local d=2 00:15:45.392 21:46:59 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.392 21:46:59 ftl -- scripts/common.sh@355 -- # echo 2 00:15:45.392 21:46:59 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.392 21:46:59 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.392 21:46:59 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.392 21:46:59 ftl -- scripts/common.sh@368 -- # return 0 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:45.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.392 --rc genhtml_branch_coverage=1 00:15:45.392 --rc genhtml_function_coverage=1 00:15:45.392 --rc genhtml_legend=1 00:15:45.392 --rc geninfo_all_blocks=1 00:15:45.392 --rc geninfo_unexecuted_blocks=1 00:15:45.392 00:15:45.392 ' 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:45.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.392 --rc genhtml_branch_coverage=1 00:15:45.392 --rc genhtml_function_coverage=1 00:15:45.392 --rc genhtml_legend=1 00:15:45.392 --rc geninfo_all_blocks=1 00:15:45.392 --rc geninfo_unexecuted_blocks=1 00:15:45.392 00:15:45.392 ' 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:45.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.392 --rc genhtml_branch_coverage=1 00:15:45.392 --rc genhtml_function_coverage=1 00:15:45.392 --rc genhtml_legend=1 00:15:45.392 --rc geninfo_all_blocks=1 00:15:45.392 --rc geninfo_unexecuted_blocks=1 00:15:45.392 00:15:45.392 ' 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:45.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.392 --rc genhtml_branch_coverage=1 00:15:45.392 --rc genhtml_function_coverage=1 00:15:45.392 --rc genhtml_legend=1 00:15:45.392 --rc geninfo_all_blocks=1 00:15:45.392 --rc geninfo_unexecuted_blocks=1 00:15:45.392 00:15:45.392 ' 00:15:45.392 21:46:59 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:45.392 21:46:59 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:45.392 21:46:59 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:45.392 21:46:59 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:45.392 21:46:59 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:45.392 21:46:59 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:45.392 21:46:59 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.392 21:46:59 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:45.392 21:46:59 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:45.392 21:46:59 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.392 21:46:59 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.392 21:46:59 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:45.392 21:46:59 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:45.392 21:46:59 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:45.392 21:46:59 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:45.392 21:46:59 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:45.392 21:46:59 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:45.392 21:46:59 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.392 21:46:59 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.392 21:46:59 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:45.392 21:46:59 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:45.392 21:46:59 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:45.392 21:46:59 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:45.392 21:46:59 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:45.392 21:46:59 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:45.392 21:46:59 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:45.392 21:46:59 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:45.392 21:46:59 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:45.392 21:46:59 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:45.392 21:46:59 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.392 21:46:59 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:45.392 21:46:59 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:45.392 21:46:59 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:45.392 21:46:59 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:45.392 21:46:59 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:45.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:45.392 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:45.392 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:45.392 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:45.392 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:45.392 21:46:59 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72719 00:15:45.392 21:46:59 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72719 00:15:45.392 21:46:59 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@831 -- # '[' -z 72719 ']' 00:15:45.392 21:46:59 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:45.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:45.392 21:46:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:45.392 [2024-09-29 21:46:59.915083] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:15:45.392 [2024-09-29 21:46:59.915210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72719 ] 00:15:45.392 [2024-09-29 21:47:00.064150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.392 [2024-09-29 21:47:00.261811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.392 21:47:00 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:45.392 21:47:00 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:45.392 21:47:00 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:45.392 21:47:00 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:45.392 21:47:01 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:45.392 21:47:01 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:45.392 21:47:02 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:45.392 21:47:02 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:45.392 21:47:02 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:45.392 21:47:02 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:45.392 21:47:02 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:45.392 21:47:02 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:45.392 21:47:02 ftl -- ftl/ftl.sh@50 -- # break 00:15:45.392 21:47:02 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:45.392 21:47:02 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:45.392 21:47:02 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:45.393 21:47:02 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:45.393 21:47:02 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:45.393 21:47:02 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:45.393 21:47:02 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:45.393 21:47:02 ftl -- ftl/ftl.sh@63 -- # break 00:15:45.393 21:47:02 ftl -- ftl/ftl.sh@66 -- # killprocess 72719 00:15:45.393 21:47:02 ftl -- common/autotest_common.sh@950 -- # '[' -z 72719 ']' 00:15:45.393 21:47:02 ftl -- common/autotest_common.sh@954 -- # kill -0 72719 00:15:45.393 21:47:02 ftl -- common/autotest_common.sh@955 -- # uname 00:15:45.393 21:47:02 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.393 21:47:02 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72719 00:15:45.393 killing process with pid 72719 00:15:45.393 21:47:02 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:45.393 21:47:02 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:45.393 21:47:02 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72719' 00:15:45.393 21:47:02 ftl -- common/autotest_common.sh@969 -- # kill 72719 00:15:45.393 21:47:02 ftl -- common/autotest_common.sh@974 -- # wait 72719 00:15:45.393 21:47:03 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:45.393 21:47:03 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:45.393 21:47:03 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:45.393 21:47:03 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:45.393 21:47:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:45.393 ************************************ 00:15:45.393 START TEST ftl_fio_basic 00:15:45.393 ************************************ 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:45.393 * Looking for test storage... 00:15:45.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.393 --rc genhtml_branch_coverage=1 00:15:45.393 --rc genhtml_function_coverage=1 00:15:45.393 --rc genhtml_legend=1 00:15:45.393 --rc geninfo_all_blocks=1 00:15:45.393 --rc geninfo_unexecuted_blocks=1 00:15:45.393 00:15:45.393 ' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.393 --rc genhtml_branch_coverage=1 00:15:45.393 --rc genhtml_function_coverage=1 00:15:45.393 --rc genhtml_legend=1 00:15:45.393 --rc geninfo_all_blocks=1 00:15:45.393 --rc geninfo_unexecuted_blocks=1 00:15:45.393 00:15:45.393 ' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.393 --rc genhtml_branch_coverage=1 00:15:45.393 --rc genhtml_function_coverage=1 00:15:45.393 --rc genhtml_legend=1 00:15:45.393 --rc geninfo_all_blocks=1 00:15:45.393 --rc geninfo_unexecuted_blocks=1 00:15:45.393 00:15:45.393 ' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:45.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.393 --rc genhtml_branch_coverage=1 00:15:45.393 --rc genhtml_function_coverage=1 00:15:45.393 --rc genhtml_legend=1 00:15:45.393 --rc geninfo_all_blocks=1 00:15:45.393 --rc geninfo_unexecuted_blocks=1 00:15:45.393 00:15:45.393 ' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72850 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72850 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72850 ']' 00:15:45.393 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.394 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:45.394 21:47:04 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:45.394 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.394 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:45.394 21:47:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:45.394 [2024-09-29 21:47:04.234711] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:45.394 [2024-09-29 21:47:04.234945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72850 ]
00:15:45.652 [2024-09-29 21:47:04.377685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:45.652 [2024-09-29 21:47:04.548570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:15:45.652 [2024-09-29 21:47:04.548711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:45.652 [2024-09-29 21:47:04.548724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:15:46.220 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:46.220 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0
00:15:46.220 21:47:05 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:15:46.220 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0
00:15:46.220 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:15:46.220 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424
00:15:46.220 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev
00:15:46.220 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:15:46.480 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:15:46.480 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size
00:15:46.480 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:15:46.480 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1
00:15:46.480 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info
00:15:46.480 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs
00:15:46.480 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb
00:15:46.480 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[
00:15:46.739 {
00:15:46.739 "name": "nvme0n1",
00:15:46.739 "aliases": [
00:15:46.739 "86370e0d-272f-4b27-95eb-ef0fe269ebcd"
00:15:46.739 ],
00:15:46.739 "product_name": "NVMe disk",
00:15:46.739 "block_size": 4096,
00:15:46.739 "num_blocks": 1310720,
00:15:46.739 "uuid": "86370e0d-272f-4b27-95eb-ef0fe269ebcd",
00:15:46.739 "numa_id": -1,
00:15:46.739 "assigned_rate_limits": {
00:15:46.739 "rw_ios_per_sec": 0,
00:15:46.739 "rw_mbytes_per_sec": 0,
00:15:46.739 "r_mbytes_per_sec": 0,
00:15:46.739 "w_mbytes_per_sec": 0
00:15:46.739 },
00:15:46.739 "claimed": false,
00:15:46.739 "zoned": false,
00:15:46.739 "supported_io_types": {
00:15:46.739 "read": true,
00:15:46.739 "write": true,
00:15:46.739 "unmap": true,
00:15:46.739 "flush": true,
00:15:46.739 "reset": true,
00:15:46.739 "nvme_admin": true,
00:15:46.739 "nvme_io": true,
00:15:46.739 "nvme_io_md": false,
00:15:46.739 "write_zeroes": true,
00:15:46.739 "zcopy": false,
00:15:46.739 "get_zone_info": false,
00:15:46.739 "zone_management": false,
00:15:46.739 "zone_append": false,
00:15:46.739 "compare": true,
00:15:46.739 "compare_and_write": false,
00:15:46.739 "abort": true,
00:15:46.739 "seek_hole": false,
00:15:46.739 "seek_data": false,
00:15:46.739 "copy": true,
00:15:46.739 "nvme_iov_md": false
00:15:46.739 },
00:15:46.739 "driver_specific": {
00:15:46.739 "nvme": [
00:15:46.739 {
00:15:46.739 "pci_address": "0000:00:11.0",
00:15:46.739 "trid": {
00:15:46.739 "trtype": "PCIe",
00:15:46.739 "traddr": "0000:00:11.0"
00:15:46.739 },
00:15:46.739 "ctrlr_data": {
00:15:46.739 "cntlid": 0,
00:15:46.739 "vendor_id": "0x1b36",
00:15:46.739 "model_number": "QEMU NVMe Ctrl",
00:15:46.739 "serial_number": "12341",
00:15:46.739 "firmware_revision": "8.0.0",
00:15:46.739 "subnqn": "nqn.2019-08.org.qemu:12341",
00:15:46.739 "oacs": {
00:15:46.739 "security": 0,
00:15:46.739 "format": 1,
00:15:46.739 "firmware": 0,
00:15:46.739 "ns_manage": 1
00:15:46.739 },
00:15:46.739 "multi_ctrlr": false,
00:15:46.739 "ana_reporting": false
00:15:46.739 },
00:15:46.739 "vs": {
00:15:46.739 "nvme_version": "1.4"
00:15:46.739 },
00:15:46.739 "ns_data": {
00:15:46.739 "id": 1,
00:15:46.739 "can_share": false
00:15:46.739 }
00:15:46.739 }
00:15:46.739 ],
00:15:46.739 "mp_policy": "active_passive"
00:15:46.739 }
00:15:46.739 }
00:15:46.739 ]'
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:15:46.739 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:15:46.998 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores=
00:15:46.998 21:47:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=2eb9f130-f1b2-4918-88ae-270e4fdd5461
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2eb9f130-f1b2-4918-88ae-270e4fdd5461
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size=
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133
00:15:47.257 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:47.257 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:47.257 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:47.257 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133 00:15:47.515 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:47.515 { 00:15:47.515 "name": "30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133", 00:15:47.515 "aliases": [ 00:15:47.515 "lvs/nvme0n1p0" 00:15:47.515 ], 00:15:47.515 "product_name": "Logical Volume", 00:15:47.515 "block_size": 4096, 00:15:47.515 "num_blocks": 26476544, 00:15:47.515 "uuid": "30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133", 00:15:47.515 "assigned_rate_limits": { 00:15:47.515 "rw_ios_per_sec": 0, 00:15:47.515 "rw_mbytes_per_sec": 0, 00:15:47.515 "r_mbytes_per_sec": 0, 00:15:47.515 "w_mbytes_per_sec": 0 00:15:47.515 }, 00:15:47.515 "claimed": false, 00:15:47.515 "zoned": false, 00:15:47.515 "supported_io_types": { 00:15:47.515 "read": true, 00:15:47.515 "write": true, 00:15:47.515 "unmap": true, 00:15:47.515 "flush": false, 00:15:47.515 "reset": true, 00:15:47.515 "nvme_admin": false, 00:15:47.515 "nvme_io": false, 00:15:47.515 "nvme_io_md": false, 00:15:47.515 "write_zeroes": true, 00:15:47.515 "zcopy": false, 00:15:47.515 "get_zone_info": false, 00:15:47.515 "zone_management": false, 00:15:47.515 "zone_append": false, 00:15:47.515 "compare": false, 00:15:47.515 "compare_and_write": false, 00:15:47.515 "abort": false, 00:15:47.515 "seek_hole": true, 00:15:47.515 "seek_data": true, 00:15:47.515 "copy": false, 00:15:47.515 "nvme_iov_md": false 00:15:47.515 }, 00:15:47.515 "driver_specific": { 00:15:47.515 "lvol": { 00:15:47.515 "lvol_store_uuid": "2eb9f130-f1b2-4918-88ae-270e4fdd5461", 00:15:47.515 "base_bdev": "nvme0n1", 00:15:47.515 "thin_provision": true, 00:15:47.515 "num_allocated_clusters": 0, 00:15:47.515 "snapshot": false, 00:15:47.515 "clone": false, 00:15:47.515 "esnap_clone": false 00:15:47.515 } 00:15:47.515 } 00:15:47.515 } 00:15:47.515 ]' 00:15:47.515 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:47.515 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:47.515 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:47.774 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:47.774 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:47.774 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:47.774 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:47.774 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:47.774 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:48.033 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:48.033 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:48.033 21:47:06 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133 00:15:48.033 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133 00:15:48.033 21:47:06 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:48.033 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:48.033 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:48.033 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133 00:15:48.033 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:48.033 { 00:15:48.033 "name": "30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133", 00:15:48.033 "aliases": [ 00:15:48.033 "lvs/nvme0n1p0" 00:15:48.033 ], 00:15:48.033 "product_name": "Logical Volume", 00:15:48.033 "block_size": 4096, 00:15:48.033 "num_blocks": 26476544, 00:15:48.033 "uuid": "30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133", 00:15:48.033 "assigned_rate_limits": { 00:15:48.033 "rw_ios_per_sec": 0, 00:15:48.033 "rw_mbytes_per_sec": 0, 00:15:48.033 "r_mbytes_per_sec": 0, 00:15:48.033 "w_mbytes_per_sec": 0 00:15:48.033 }, 00:15:48.033 "claimed": false, 00:15:48.033 "zoned": false, 00:15:48.033 "supported_io_types": { 00:15:48.033 "read": true, 00:15:48.033 "write": true, 00:15:48.033 "unmap": true, 00:15:48.033 "flush": false, 00:15:48.033 "reset": true, 00:15:48.033 "nvme_admin": false, 00:15:48.033 "nvme_io": false, 00:15:48.033 "nvme_io_md": false, 00:15:48.033 "write_zeroes": true, 00:15:48.033 "zcopy": false, 00:15:48.033 "get_zone_info": false, 00:15:48.033 "zone_management": false, 00:15:48.033 "zone_append": false, 00:15:48.033 "compare": false, 00:15:48.033 "compare_and_write": false, 00:15:48.033 "abort": false, 00:15:48.033 "seek_hole": true, 00:15:48.033 "seek_data": true, 00:15:48.033 "copy": false, 00:15:48.033 "nvme_iov_md": false 00:15:48.033 }, 00:15:48.033 "driver_specific": { 00:15:48.033 "lvol": { 00:15:48.033 "lvol_store_uuid": "2eb9f130-f1b2-4918-88ae-270e4fdd5461", 00:15:48.033 "base_bdev": "nvme0n1", 00:15:48.033 "thin_provision": true, 00:15:48.033 "num_allocated_clusters": 0, 00:15:48.033 "snapshot": false, 00:15:48.033 "clone": false, 00:15:48.033 "esnap_clone": false 00:15:48.033 } 00:15:48.033 } 00:15:48.033 } 00:15:48.033 ]' 00:15:48.033 21:47:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:48.033 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:48.033 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:48.292 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:48.292 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133 00:15:48.551 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:48.551 { 00:15:48.551 "name": "30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133", 00:15:48.551 "aliases": [ 00:15:48.551 "lvs/nvme0n1p0" 00:15:48.551 ], 00:15:48.551 "product_name": "Logical Volume", 00:15:48.551 "block_size": 4096, 00:15:48.551 "num_blocks": 26476544, 00:15:48.551 "uuid": "30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133", 00:15:48.551 "assigned_rate_limits": { 00:15:48.551 "rw_ios_per_sec": 0, 00:15:48.551 "rw_mbytes_per_sec": 0, 00:15:48.551 "r_mbytes_per_sec": 0, 00:15:48.551 "w_mbytes_per_sec": 0 00:15:48.551 }, 00:15:48.551 "claimed": false, 00:15:48.551 "zoned": false, 00:15:48.551 "supported_io_types": { 00:15:48.551 "read": true, 00:15:48.551 "write": true, 00:15:48.551 "unmap": true, 00:15:48.551 "flush": false, 00:15:48.551 "reset": true, 00:15:48.551 "nvme_admin": false, 00:15:48.551 "nvme_io": false, 00:15:48.551 "nvme_io_md": false, 00:15:48.551 "write_zeroes": true, 00:15:48.551 "zcopy": false, 00:15:48.551 "get_zone_info": false, 00:15:48.551 "zone_management": false, 00:15:48.551 "zone_append": false, 00:15:48.551 "compare": false, 00:15:48.551 "compare_and_write": false, 00:15:48.551 "abort": false, 00:15:48.551 "seek_hole": true, 00:15:48.551 "seek_data": true, 00:15:48.551 "copy": false, 00:15:48.551 "nvme_iov_md": false 00:15:48.551 }, 00:15:48.551 "driver_specific": { 00:15:48.551 "lvol": { 00:15:48.551 "lvol_store_uuid": "2eb9f130-f1b2-4918-88ae-270e4fdd5461", 00:15:48.551 "base_bdev": "nvme0n1", 00:15:48.551 "thin_provision": true, 00:15:48.551 "num_allocated_clusters": 0, 00:15:48.551 "snapshot": false, 00:15:48.551 "clone": false, 00:15:48.551 "esnap_clone": false 00:15:48.551 } 00:15:48.551 } 00:15:48.551 } 00:15:48.551 ]' 00:15:48.551 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:48.551 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:48.551 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:48.551 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:48.551 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:48.551 21:47:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:48.551 21:47:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:48.551 21:47:07 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:48.551 21:47:07 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133 -c nvc0n1p0 --l2p_dram_limit 60 00:15:48.813 [2024-09-29 21:47:07.615514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.813 [2024-09-29 21:47:07.615632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:48.813 [2024-09-29 21:47:07.615654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:48.813 
[2024-09-29 21:47:07.615661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.813 [2024-09-29 21:47:07.615712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.813 [2024-09-29 21:47:07.615719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:48.813 [2024-09-29 21:47:07.615728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:48.813 [2024-09-29 21:47:07.615736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.813 [2024-09-29 21:47:07.615779] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:48.813 [2024-09-29 21:47:07.616272] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:48.813 [2024-09-29 21:47:07.616294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.813 [2024-09-29 21:47:07.616300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:48.814 [2024-09-29 21:47:07.616309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:15:48.814 [2024-09-29 21:47:07.616315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.814 [2024-09-29 21:47:07.616445] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 28e4dd14-81a9-4fc5-984c-0351e183f62d 00:15:48.814 [2024-09-29 21:47:07.617690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.814 [2024-09-29 21:47:07.617719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:48.814 [2024-09-29 21:47:07.617729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:48.814 [2024-09-29 21:47:07.617739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.814 [2024-09-29 21:47:07.624431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.814 [2024-09-29 21:47:07.624458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:48.814 [2024-09-29 21:47:07.624466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.626 ms 00:15:48.814 [2024-09-29 21:47:07.624474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.814 [2024-09-29 21:47:07.624556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.814 [2024-09-29 21:47:07.624566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:48.814 [2024-09-29 21:47:07.624573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:15:48.814 [2024-09-29 21:47:07.624584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.814 [2024-09-29 21:47:07.624626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.814 [2024-09-29 21:47:07.624637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:48.814 [2024-09-29 21:47:07.624644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:48.814 [2024-09-29 21:47:07.624651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.814 [2024-09-29 21:47:07.624675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:48.814 [2024-09-29 21:47:07.627880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.814 [2024-09-29 
21:47:07.627904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:48.814 [2024-09-29 21:47:07.627914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:15:48.814 [2024-09-29 21:47:07.627921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.814 [2024-09-29 21:47:07.627956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.814 [2024-09-29 21:47:07.627963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:48.814 [2024-09-29 21:47:07.627972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:48.814 [2024-09-29 21:47:07.627978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.814 [2024-09-29 21:47:07.628021] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:48.814 [2024-09-29 21:47:07.628141] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:48.814 [2024-09-29 21:47:07.628159] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:48.814 [2024-09-29 21:47:07.628169] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:48.814 [2024-09-29 21:47:07.628178] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628188] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628196] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:48.814 [2024-09-29 21:47:07.628202] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:48.814 [2024-09-29 21:47:07.628209] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:48.814 [2024-09-29 21:47:07.628215] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:48.814 [2024-09-29 21:47:07.628223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.814 [2024-09-29 21:47:07.628230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:48.814 [2024-09-29 21:47:07.628238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:15:48.814 [2024-09-29 21:47:07.628244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.814 [2024-09-29 21:47:07.628320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.814 [2024-09-29 21:47:07.628331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:48.814 [2024-09-29 21:47:07.628341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:48.814 [2024-09-29 21:47:07.628347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.814 [2024-09-29 21:47:07.628455] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:48.814 [2024-09-29 21:47:07.628465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:48.814 [2024-09-29 21:47:07.628473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628487] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:48.814 [2024-09-29 21:47:07.628493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:48.814 [2024-09-29 21:47:07.628513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:48.814 [2024-09-29 21:47:07.628525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:48.814 [2024-09-29 21:47:07.628530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:48.814 [2024-09-29 21:47:07.628537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:48.814 [2024-09-29 21:47:07.628542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:48.814 [2024-09-29 21:47:07.628549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:48.814 [2024-09-29 21:47:07.628554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:48.814 [2024-09-29 21:47:07.628567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:48.814 [2024-09-29 21:47:07.628589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:48.814 [2024-09-29 21:47:07.628606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:48.814 [2024-09-29 21:47:07.628625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:48.814 [2024-09-29 21:47:07.628644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:48.814 [2024-09-29 21:47:07.628664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:48.814 [2024-09-29 21:47:07.628677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:48.814 [2024-09-29 21:47:07.628683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:48.814 [2024-09-29 21:47:07.628689] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:48.814 [2024-09-29 21:47:07.628694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:48.814 [2024-09-29 21:47:07.628701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:48.814 [2024-09-29 21:47:07.628718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:48.814 [2024-09-29 21:47:07.628730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:48.814 [2024-09-29 21:47:07.628737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628741] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:48.814 [2024-09-29 21:47:07.628752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:48.814 [2024-09-29 21:47:07.628758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:48.814 [2024-09-29 21:47:07.628773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:48.814 [2024-09-29 21:47:07.628781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:48.814 [2024-09-29 21:47:07.628786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:48.814 [2024-09-29 21:47:07.628792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:48.814 [2024-09-29 21:47:07.628798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:48.814 [2024-09-29 21:47:07.628805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:48.814 [2024-09-29 21:47:07.628813] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:48.814 [2024-09-29 21:47:07.628825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:48.814 [2024-09-29 21:47:07.628831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:48.814 [2024-09-29 21:47:07.628838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:48.814 [2024-09-29 21:47:07.628844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:48.815 [2024-09-29 21:47:07.628850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:48.815 [2024-09-29 21:47:07.628858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:48.815 [2024-09-29 21:47:07.628865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:48.815 [2024-09-29 21:47:07.628870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:48.815 [2024-09-29 21:47:07.628878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:48.815 [2024-09-29 21:47:07.628883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:48.815 [2024-09-29 21:47:07.628893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:48.815 [2024-09-29 21:47:07.628898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:48.815 [2024-09-29 21:47:07.628906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:48.815 [2024-09-29 21:47:07.628911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:48.815 [2024-09-29 21:47:07.628918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:48.815 [2024-09-29 21:47:07.628924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:48.815 [2024-09-29 21:47:07.628931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:48.815 [2024-09-29 21:47:07.628937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:48.815 [2024-09-29 21:47:07.628944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:48.815 [2024-09-29 21:47:07.628949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:48.815 [2024-09-29 21:47:07.628957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:48.815 [2024-09-29 21:47:07.628963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:48.815 [2024-09-29 21:47:07.628970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:48.815 [2024-09-29 21:47:07.628976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:15:48.815 [2024-09-29 21:47:07.628983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:48.815 [2024-09-29 21:47:07.629044] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:15:48.815 [2024-09-29 21:47:07.629058] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:15:51.358 [2024-09-29 21:47:10.142249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.142475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:15:51.358 [2024-09-29 21:47:10.142550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2513.189 ms
00:15:51.358 [2024-09-29 21:47:10.142579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.358 [2024-09-29 21:47:10.177800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.177973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:15:51.358 [2024-09-29 21:47:10.178040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.992 ms
00:15:51.358 [2024-09-29 21:47:10.178068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.358 [2024-09-29 21:47:10.178263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.178303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:15:51.358 [2024-09-29 21:47:10.178369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:15:51.358 [2024-09-29 21:47:10.178415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.358 [2024-09-29 21:47:10.212578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.212710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:15:51.358 [2024-09-29 21:47:10.213026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.075 ms
00:15:51.358 [2024-09-29 21:47:10.213075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.358 [2024-09-29 21:47:10.213155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.213374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:15:51.358 [2024-09-29 21:47:10.213424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:15:51.358 [2024-09-29 21:47:10.213448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.358 [2024-09-29 21:47:10.213911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.214021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:15:51.358 [2024-09-29 21:47:10.214079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms
00:15:51.358 [2024-09-29 21:47:10.214119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.358 [2024-09-29 21:47:10.214305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.214343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:15:51.358 [2024-09-29 21:47:10.214404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms
00:15:51.358 [2024-09-29 21:47:10.214433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.358 [2024-09-29 21:47:10.230592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.230698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:15:51.358 [2024-09-29 21:47:10.230751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.116 ms
00:15:51.358 [2024-09-29 21:47:10.230779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.358 [2024-09-29 21:47:10.243323] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:15:51.358 [2024-09-29 21:47:10.260775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.260882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:15:51.358 [2024-09-29 21:47:10.260972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.891 ms
00:15:51.358 [2024-09-29 21:47:10.260999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.358 [2024-09-29 21:47:10.317549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.317673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:15:51.358 [2024-09-29 21:47:10.317733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.500 ms
00:15:51.358 [2024-09-29 21:47:10.317757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.358 [2024-09-29 21:47:10.317958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.358 [2024-09-29 21:47:10.317988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:15:51.358 [2024-09-29 21:47:10.318042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms
00:15:51.358 [2024-09-29 21:47:10.318117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.684 [2024-09-29 21:47:10.342542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.684 [2024-09-29 21:47:10.342663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:15:51.684 [2024-09-29 21:47:10.342720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.337 ms
00:15:51.684 [2024-09-29 21:47:10.342746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.684 [2024-09-29 21:47:10.366442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.684 [2024-09-29 21:47:10.366559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:15:51.684 [2024-09-29 21:47:10.366639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.356 ms
00:15:51.684 [2024-09-29 21:47:10.366972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.684 [2024-09-29 21:47:10.367640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.684 [2024-09-29 21:47:10.367745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:15:51.684 [2024-09-29 21:47:10.367807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms
00:15:51.684 [2024-09-29 21:47:10.367819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.684 [2024-09-29 21:47:10.437781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:51.684 [2024-09-29 21:47:10.437818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:15:51.684 [2024-09-29 21:47:10.437835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.890 ms
00:15:51.684 [2024-09-29 21:47:10.437844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:51.684 [2024-09-29
21:47:10.463773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.684 [2024-09-29 21:47:10.463809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:51.684 [2024-09-29 21:47:10.463825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.837 ms 00:15:51.684 [2024-09-29 21:47:10.463833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.684 [2024-09-29 21:47:10.487774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.684 [2024-09-29 21:47:10.487888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:51.684 [2024-09-29 21:47:10.487908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.893 ms 00:15:51.684 [2024-09-29 21:47:10.487916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.684 [2024-09-29 21:47:10.512262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.684 [2024-09-29 21:47:10.512293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:51.684 [2024-09-29 21:47:10.512306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.305 ms 00:15:51.684 [2024-09-29 21:47:10.512313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.684 [2024-09-29 21:47:10.512362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.684 [2024-09-29 21:47:10.512372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:51.684 [2024-09-29 21:47:10.512398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:51.684 [2024-09-29 21:47:10.512407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.684 [2024-09-29 21:47:10.512495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.684 [2024-09-29 21:47:10.512506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:51.684 [2024-09-29 21:47:10.512516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:51.684 [2024-09-29 21:47:10.512525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.684 [2024-09-29 21:47:10.513545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2897.515 ms, result 0 00:15:51.684 { 00:15:51.684 "name": "ftl0", 00:15:51.684 "uuid": "28e4dd14-81a9-4fc5-984c-0351e183f62d" 00:15:51.684 } 00:15:51.684 21:47:10 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:51.684 21:47:10 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:51.684 21:47:10 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:51.684 21:47:10 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:51.684 21:47:10 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:51.684 21:47:10 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:51.684 21:47:10 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:51.980 21:47:10 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:51.980 [ 00:15:51.980 { 00:15:51.980 "name": "ftl0", 00:15:51.980 "aliases": [ 00:15:51.980 "28e4dd14-81a9-4fc5-984c-0351e183f62d" 00:15:51.980 ], 00:15:51.980 "product_name": "FTL 
disk", 00:15:51.980 "block_size": 4096, 00:15:51.980 "num_blocks": 20971520, 00:15:51.980 "uuid": "28e4dd14-81a9-4fc5-984c-0351e183f62d", 00:15:51.980 "assigned_rate_limits": { 00:15:51.980 "rw_ios_per_sec": 0, 00:15:51.980 "rw_mbytes_per_sec": 0, 00:15:51.980 "r_mbytes_per_sec": 0, 00:15:51.980 "w_mbytes_per_sec": 0 00:15:51.980 }, 00:15:51.980 "claimed": false, 00:15:51.980 "zoned": false, 00:15:51.980 "supported_io_types": { 00:15:51.980 "read": true, 00:15:51.980 "write": true, 00:15:51.980 "unmap": true, 00:15:51.980 "flush": true, 00:15:51.980 "reset": false, 00:15:51.980 "nvme_admin": false, 00:15:51.980 "nvme_io": false, 00:15:51.980 "nvme_io_md": false, 00:15:51.980 "write_zeroes": true, 00:15:51.980 "zcopy": false, 00:15:51.980 "get_zone_info": false, 00:15:51.980 "zone_management": false, 00:15:51.980 "zone_append": false, 00:15:51.980 "compare": false, 00:15:51.980 "compare_and_write": false, 00:15:51.980 "abort": false, 00:15:51.980 "seek_hole": false, 00:15:51.980 "seek_data": false, 00:15:51.980 "copy": false, 00:15:51.980 "nvme_iov_md": false 00:15:51.980 }, 00:15:51.980 "driver_specific": { 00:15:51.980 "ftl": { 00:15:51.980 "base_bdev": "30c6a7c0-13ff-4bbf-8fc6-96f3eccfd133", 00:15:51.980 "cache": "nvc0n1p0" 00:15:51.980 } 00:15:51.980 } 00:15:51.980 } 00:15:51.980 ] 00:15:51.980 21:47:10 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:51.980 21:47:10 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:51.980 21:47:10 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:52.239 21:47:11 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:52.239 21:47:11 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:52.499 [2024-09-29 21:47:11.330136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.330179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:52.499 [2024-09-29 21:47:11.330191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:52.499 [2024-09-29 21:47:11.330199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.330231] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:52.499 [2024-09-29 21:47:11.332508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.332534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:52.499 [2024-09-29 21:47:11.332545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.261 ms 00:15:52.499 [2024-09-29 21:47:11.332551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.332957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.332971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:52.499 [2024-09-29 21:47:11.332980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:15:52.499 [2024-09-29 21:47:11.332986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.335472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.335490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:52.499 
[2024-09-29 21:47:11.335501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.465 ms 00:15:52.499 [2024-09-29 21:47:11.335508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.340187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.340210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:52.499 [2024-09-29 21:47:11.340220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.656 ms 00:15:52.499 [2024-09-29 21:47:11.340227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.358892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.358918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:52.499 [2024-09-29 21:47:11.358929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.583 ms 00:15:52.499 [2024-09-29 21:47:11.358935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.371507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.371535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:52.499 [2024-09-29 21:47:11.371546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.529 ms 00:15:52.499 [2024-09-29 21:47:11.371554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.371716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.371725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:52.499 [2024-09-29 21:47:11.371734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:15:52.499 [2024-09-29 21:47:11.371742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.389568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.389685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:52.499 [2024-09-29 21:47:11.389700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.803 ms 00:15:52.499 [2024-09-29 21:47:11.389706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.407244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.407268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:52.499 [2024-09-29 21:47:11.407278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.500 ms 00:15:52.499 [2024-09-29 21:47:11.407283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.424002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.424025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:52.499 [2024-09-29 21:47:11.424034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.678 ms 00:15:52.499 [2024-09-29 21:47:11.424040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.441116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.499 [2024-09-29 21:47:11.441139] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:52.499 [2024-09-29 21:47:11.441148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.992 ms 00:15:52.499 [2024-09-29 21:47:11.441154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.499 [2024-09-29 21:47:11.441191] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:52.499 [2024-09-29 21:47:11.441204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 
[2024-09-29 21:47:11.441355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:52.499 [2024-09-29 21:47:11.441484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:52.500 [2024-09-29 21:47:11.441493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:52.500 [2024-09-29 21:47:11.441498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:52.500 [2024-09-29 21:47:11.441506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:52.500 [2024-09-29 21:47:11.441512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:52.500 [2024-09-29 21:47:11.441519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:52.500 [2024-09-29 21:47:11.441524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:52.500 [2024-09-29 21:47:11.441533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:52.500 [2024-09-29 21:47:11.441538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free
00:15:52.500 [2024-09-29 21:47:11.441545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:15:52.500 [2024-09-29 21:47:11.441922] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:15:52.500 [2024-09-29 21:47:11.441929] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 28e4dd14-81a9-4fc5-984c-0351e183f62d
00:15:52.500 [2024-09-29 21:47:11.441936] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:15:52.500 [2024-09-29 21:47:11.441945] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:15:52.500 [2024-09-29 21:47:11.441950] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:15:52.500 [2024-09-29 21:47:11.441958] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:15:52.500 [2024-09-29 21:47:11.441963] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:15:52.500 [2024-09-29 21:47:11.441970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:15:52.500 [2024-09-29 21:47:11.441976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:15:52.500 [2024-09-29 21:47:11.441982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:15:52.500 [2024-09-29 21:47:11.441987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:15:52.500 [2024-09-29 21:47:11.441994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:52.500 [2024-09-29 21:47:11.442000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:15:52.500 [2024-09-29 21:47:11.442008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms
00:15:52.500 [2024-09-29 21:47:11.442015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
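A note on the statistics block just dumped: WAF (write amplification factor) is the ratio of total media writes to user writes, and it is reported as inf here because this shutdown saw 960 internal writes against 0 user writes. A minimal sketch of the same calculation, using only the two values from the dump above (the guard merely reproduces FTL's "inf" reporting for the zero-user-write case):

  total_writes=960   # "total writes" from the dump above
  user_writes=0      # "user writes" from the dump above
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { print ((u > 0) ? t / u : "inf") }'   # prints: inf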
00:15:52.500 [2024-09-29 21:47:11.452304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:52.500 [2024-09-29 21:47:11.452417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:15:52.500 [2024-09-29 21:47:11.452435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.256 ms
00:15:52.500 [2024-09-29 21:47:11.452442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.500 [2024-09-29 21:47:11.452750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:52.500 [2024-09-29 21:47:11.452761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:15:52.500 [2024-09-29 21:47:11.452770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms
00:15:52.500 [2024-09-29 21:47:11.452777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.759 [2024-09-29 21:47:11.490158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.759 [2024-09-29 21:47:11.490189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:15:52.760 [2024-09-29 21:47:11.490200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.490207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.490256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.490265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:15:52.760 [2024-09-29 21:47:11.490273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.490280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.490363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.490372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:15:52.760 [2024-09-29 21:47:11.490381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.490401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.490425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.490431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:15:52.760 [2024-09-29 21:47:11.490441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.490447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.557535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.557576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:15:52.760 [2024-09-29 21:47:11.557587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.557594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.609266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.609305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:15:52.760 [2024-09-29 21:47:11.609319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.609326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.609427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.609437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:15:52.760 [2024-09-29 21:47:11.609445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.609451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.609529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.609539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:15:52.760 [2024-09-29 21:47:11.609546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.609552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.609650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.609658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:15:52.760 [2024-09-29 21:47:11.609666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.609672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.609719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.609726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:15:52.760 [2024-09-29 21:47:11.609735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.609741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.609786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.609793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:15:52.760 [2024-09-29 21:47:11.609801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.609806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.609855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:52.760 [2024-09-29 21:47:11.609863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:15:52.760 [2024-09-29 21:47:11.609871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:52.760 [2024-09-29 21:47:11.609878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:52.760 [2024-09-29 21:47:11.610031] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 279.869 ms, result 0
00:15:52.760 true
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72850
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72850 ']'
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72850
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72850
00:15:52.760 killing process with pid 72850
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72850'
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72850
00:15:52.760 21:47:11 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72850
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib=
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:59.343 21:47:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:59.344 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:15:59.344 fio-3.35
00:15:59.344 Starting 1 thread
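The job header above (rw=randwrite, bs=68.0KiB, ioengine=spdk_bdev, iodepth=1) corresponds to a job file along the following lines. This is a sketch only — randw-verify.fio itself is not reproduced in this log, and the filename, verify mode, and spdk_json_conf values here are placeholders rather than the test's real ones:

  # sketch: minimal fio job for the SPDK bdev ioengine, matching the header above
  cat > randw-verify.fio <<'EOF'
  [global]
  ioengine=spdk_bdev          ; SPDK fio plugin, preloaded via LD_PRELOAD above
  spdk_json_conf=ftl.json     ; placeholder: bdev config that defines the FTL bdev
  thread=1
  rw=randwrite
  bs=68k                      ; matches the 68.0KiB block size in the header
  iodepth=1
  verify=crc32c               ; placeholder checksum; the test name says "randw-verify"

  [test]
  filename=ftl0               ; placeholder bdev name
  EOF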
00:16:04.627 
00:16:04.627 test: (groupid=0, jobs=1): err= 0: pid=73032: Sun Sep 29 21:47:23 2024
00:16:04.627   read: IOPS=888, BW=59.0MiB/s (61.9MB/s)(255MiB/4313msec)
00:16:04.627     slat (nsec): min=4226, max=30306, avg=5931.89, stdev=2141.98
00:16:04.627     clat (usec): min=275, max=8307, avg=515.87, stdev=236.94
00:16:04.627      lat (usec): min=280, max=8312, avg=521.80, stdev=237.13
00:16:04.627     clat percentiles (usec):
00:16:04.627      | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 334],
00:16:04.627      | 30.00th=[ 343], 40.00th=[ 404], 50.00th=[ 469], 60.00th=[ 537],
00:16:04.627      | 70.00th=[ 611], 80.00th=[ 685], 90.00th=[ 807], 95.00th=[ 832],
00:16:04.627      | 99.00th=[ 906], 99.50th=[ 947], 99.90th=[ 2147], 99.95th=[ 5080],
00:16:04.627      | 99.99th=[ 8291]
00:16:04.627   write: IOPS=894, BW=59.4MiB/s (62.3MB/s)(256MiB/4310msec); 0 zone resets
00:16:04.627     slat (nsec): min=14584, max=57741, avg=19962.59, stdev=3747.92
00:16:04.627     clat (usec): min=293, max=1376, avg=566.49, stdev=193.83
00:16:04.627      lat (usec): min=319, max=1395, avg=586.46, stdev=194.44
00:16:04.627     clat percentiles (usec):
00:16:04.627      | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 359],
00:16:04.627      | 30.00th=[ 371], 40.00th=[ 490], 50.00th=[ 562], 60.00th=[ 627],
00:16:04.627      | 70.00th=[ 693], 80.00th=[ 758], 90.00th=[ 848], 95.00th=[ 906],
00:16:04.627      | 99.00th=[ 996], 99.50th=[ 1057], 99.90th=[ 1221], 99.95th=[ 1352],
00:16:04.627      | 99.99th=[ 1385]
00:16:04.627    bw (  KiB/s): min=48008, max=84864, per=97.30%, avg=59194.00, stdev=12744.62, samples=8
00:16:04.627    iops        : min=  706, max= 1248, avg=870.50, stdev=187.42, samples=8
00:16:04.627   lat (usec)   : 500=49.75%, 750=33.05%, 1000=16.60%
00:16:04.627   lat (msec)   : 2=0.56%, 4=0.03%, 10=0.03%
00:16:04.627   cpu          : usr=99.23%, sys=0.05%, ctx=8, majf=0, minf=1169
00:16:04.627   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:04.627      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:04.627      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:04.627      issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:04.627      latency   : target=0, window=0, percentile=100.00%, depth=1
00:16:04.627 
00:16:04.627 Run status group 0 (all jobs):
00:16:04.627    READ: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=255MiB (267MB), run=4313-4313msec
00:16:04.627   WRITE: bw=59.4MiB/s (62.3MB/s), 59.4MiB/s-59.4MiB/s (62.3MB/s-62.3MB/s), io=256MiB (269MB), run=4310-4310msec
00:16:06.541 -----------------------------------------------------
00:16:06.541 Suppressions used:
00:16:06.541   count      bytes template
00:16:06.541       1          5 /usr/src/fio/parse.c
00:16:06.541       1          8 libtcmalloc_minimal.so
00:16:06.541       1        904 libcrypto.so
00:16:06.541 -----------------------------------------------------
00:16:06.541 
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib=
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break
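The trace above is the fio wrapper locating the ASan runtime that the plugin is linked against (ldd piped through grep and awk), so that the sanitizer can be preloaded ahead of the SPDK ioengine in the launch that follows — the sanitizer runtime needs to be the first preloaded DSO so it initializes before anything else. As a standalone sketch of the same pattern (paths as used in this run; job.fio is a placeholder):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # resolve the libasan the plugin was linked against
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # sanitizer runtime first, then the fio ioengine plugin
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio job.fio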
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:06.541 21:47:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:16:06.541 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:16:06.541 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:16:06.541 fio-3.35
00:16:06.541 Starting 2 threads
00:16:33.100 
00:16:33.100 first_half: (groupid=0, jobs=1): err= 0: pid=73141: Sun Sep 29 21:47:49 2024
00:16:33.100   read: IOPS=2831, BW=11.1MiB/s (11.6MB/s)(256MiB/23118msec)
00:16:33.100     slat (nsec): min=2955, max=22670, avg=5284.95, stdev=933.71
00:16:33.100     clat (usec): min=1592, max=323404, avg=38388.63, stdev=26193.56
00:16:33.100      lat (usec): min=1598, max=323408, avg=38393.91, stdev=26193.60
00:16:33.100     clat percentiles (msec):
00:16:33.100      | 1.00th=[   13], 5.00th=[   28], 10.00th=[   31], 20.00th=[   31],
00:16:33.100      | 30.00th=[   31], 40.00th=[   31], 50.00th=[   32], 60.00th=[   32],
00:16:33.100      | 70.00th=[   36], 80.00th=[   37], 90.00th=[   46], 95.00th=[   81],
00:16:33.100      | 99.00th=[  163], 99.50th=[  199], 99.90th=[  288], 99.95th=[  317],
00:16:33.100      | 99.99th=[  321]
00:16:33.100   write: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(256MiB/23068msec); 0 zone resets
00:16:33.100     slat (usec): min=3, max=332, avg= 6.59, stdev= 3.15
00:16:33.100     clat (usec): min=392, max=61170, avg=6782.55, stdev=7605.37
00:16:33.100      lat (usec): min=403, max=61174, avg=6789.15, stdev=7605.43
00:16:33.100     clat percentiles (usec):
00:16:33.100      | 1.00th=[  734], 5.00th=[  898], 10.00th=[ 1270], 20.00th=[ 2671],
00:16:33.100      | 30.00th=[ 3589], 40.00th=[ 4359], 50.00th=[ 5080], 60.00th=[ 5669],
00:16:33.100      | 70.00th=[ 6194], 80.00th=[ 8225], 90.00th=[11076], 95.00th=[24773],
00:16:33.100      | 99.00th=[40109], 99.50th=[51643], 99.90th=[57934], 99.95th=[58459],
00:16:33.100      | 99.99th=[59507]
00:16:33.100    bw (  KiB/s): min= 6672, max=44816, per=100.00%, avg=26038.80, stdev=12332.23, samples=20
00:16:33.100    iops        : min= 1668, max=11204, avg=6509.70, stdev=3083.06, samples=20
00:16:33.100   lat (usec)   : 500=0.02%, 750=0.64%, 1000=2.78%
00:16:33.100   lat (msec)   : 2=3.89%, 4=10.50%, 10=25.05%, 20=5.60%, 50=46.73%
00:16:33.100   lat (msec)   : 100=2.90%, 250=1.77%, 500=0.12%
00:16:33.100   cpu          : usr=99.35%, sys=0.10%, ctx=40, majf=0, minf=5517
00:16:33.100   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:16:33.100      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:33.100      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:33.100      issued rwts: total=65468,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:33.100      latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:33.100 second_half: (groupid=0, jobs=1): err= 0: pid=73142: Sun Sep 29 21:47:49 2024
00:16:33.100   read: IOPS=2867, BW=11.2MiB/s (11.7MB/s)(256MiB/22834msec)
00:16:33.100     slat (usec): min=2, max=189, avg= 4.56, stdev= 1.46
00:16:33.100     clat (msec): min=7, max=313, avg=38.34, stdev=22.97
00:16:33.100      lat (msec): min=7, max=313, avg=38.35, stdev=22.97
00:16:33.100     clat percentiles (msec):
00:16:33.100      | 1.00th=[   28], 5.00th=[   30], 10.00th=[   31], 20.00th=[   31],
00:16:33.100      | 30.00th=[   31], 40.00th=[   31], 50.00th=[   32], 60.00th=[   33],
00:16:33.100      | 70.00th=[   36], 80.00th=[   37], 90.00th=[   46], 95.00th=[   81],
00:16:33.100      | 99.00th=[  157], 99.50th=[  174], 99.90th=[  234], 99.95th=[  249],
00:16:33.101      | 99.99th=[  296]
00:16:33.101   write: IOPS=2888, BW=11.3MiB/s (11.8MB/s)(256MiB/22687msec); 0 zone resets
00:16:33.101     slat (usec): min=3, max=330, avg= 5.89, stdev= 3.36
00:16:33.101     clat (usec): min=361, max=64097, avg=6263.90, stdev=5372.95
00:16:33.101      lat (usec): min=383, max=64105, avg=6269.79, stdev=5373.17
00:16:33.101     clat percentiles (usec):
00:16:33.101      | 1.00th=[  889], 5.00th=[ 1778], 10.00th=[ 2442], 20.00th=[ 3294],
00:16:33.101      | 30.00th=[ 4015], 40.00th=[ 4752], 50.00th=[ 5211], 60.00th=[ 5604],
00:16:33.101      | 70.00th=[ 6259], 80.00th=[ 8848], 90.00th=[10814], 95.00th=[12125],
00:16:33.101      | 99.00th=[30278], 99.50th=[45876], 99.90th=[60031], 99.95th=[61604],
00:16:33.101      | 99.99th=[63177]
00:16:33.101    bw (  KiB/s): min=  264, max=48632, per=100.00%, avg=27399.32, stdev=15941.19, samples=19
00:16:33.101    iops        : min=   66, max=12158, avg=6849.79, stdev=3985.29, samples=19
00:16:33.101   lat (usec)   : 500=0.03%, 750=0.17%, 1000=0.68%
00:16:33.101   lat (msec)   : 2=2.10%, 4=11.90%, 10=27.92%, 20=6.52%, 50=46.07%
00:16:33.101   lat (msec)   : 100=2.92%, 250=1.67%, 500=0.02%
00:16:33.101   cpu          : usr=99.23%, sys=0.11%, ctx=40, majf=0, minf=5588
00:16:33.101   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:16:33.101      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:33.101      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:33.101      issued rwts: total=65487,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:33.101      latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:33.101 
00:16:33.101 Run status group 0 (all jobs):
00:16:33.101    READ: bw=22.1MiB/s (23.2MB/s), 11.1MiB/s-11.2MiB/s (11.6MB/s-11.7MB/s), io=512MiB (536MB), run=22834-23118msec
00:16:33.101   WRITE: bw=22.2MiB/s (23.3MB/s), 11.1MiB/s-11.3MiB/s (11.6MB/s-11.8MB/s), io=512MiB (537MB), run=22687-23068msec
00:16:33.101 -----------------------------------------------------
00:16:33.101 Suppressions used:
00:16:33.101   count      bytes template
00:16:33.101       2         10 /usr/src/fio/parse.c
00:16:33.101       3        288 /usr/src/fio/iolog.c
00:16:33.101       1          8 libtcmalloc_minimal.so
00:16:33.101       1        904 libcrypto.so
00:16:33.101 -----------------------------------------------------
00:16:33.101 
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib=
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan
00:16:33.101 21:47:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:16:33.102 21:47:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:33.102 21:47:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:33.102 21:47:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break
00:16:33.102 21:47:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:33.102 21:47:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:16:33.364 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:16:33.364 fio-3.35
00:16:33.364 Starting 1 thread
00:16:48.260 
00:16:48.260 test: (groupid=0, jobs=1): err= 0: pid=73457: Sun Sep 29 21:48:06 2024
00:16:48.260   read: IOPS=7686, BW=30.0MiB/s (31.5MB/s)(255MiB/8483msec)
00:16:48.260     slat (usec): min=3, max=108, avg= 4.86, stdev= 1.14
00:16:48.260     clat (usec): min=536, max=45668, avg=16644.05, stdev=2291.76
00:16:48.260      lat (usec): min=540, max=45673, avg=16648.91, stdev=2291.79
00:16:48.260     clat percentiles (usec):
00:16:48.260      | 1.00th=[14484], 5.00th=[15401], 10.00th=[15664], 20.00th=[15795],
00:16:48.260      | 30.00th=[15926], 40.00th=[16057], 50.00th=[16188], 60.00th=[16319],
00:16:48.260      | 70.00th=[16450], 80.00th=[16581], 90.00th=[16909], 95.00th=[21103],
00:16:48.260      | 99.00th=[26346], 99.50th=[30278], 99.90th=[37487], 99.95th=[41681],
00:16:48.260      | 99.99th=[44827]
00:16:48.260   write: IOPS=13.5k, BW=52.6MiB/s (55.2MB/s)(256MiB/4863msec); 0 zone resets
00:16:48.260     slat (usec): min=4, max=143, avg= 7.63, stdev= 3.03
00:16:48.260     clat (usec): min=492, max=47168, avg=9454.75, stdev=10187.82
00:16:48.260      lat (usec): min=499, max=47175, avg=9462.38, stdev=10187.91
00:16:48.260     clat percentiles (usec):
00:16:48.260      | 1.00th=[  652], 5.00th=[  742], 10.00th=[  816], 20.00th=[  963],
00:16:48.260      | 30.00th=[ 1106], 40.00th=[ 1450], 50.00th=[ 5735], 60.00th=[ 8455],
00:16:48.260      | 70.00th=[12780], 80.00th=[16450], 90.00th=[28705], 95.00th=[30802],
00:16:48.260      | 99.00th=[33817], 99.50th=[36963], 99.90th=[40109], 99.95th=[41157],
00:16:48.260      | 99.99th=[45876]
00:16:48.260    bw (  KiB/s): min=33608, max=71120, per=97.26%, avg=52428.80, stdev=14113.74, samples=10
00:16:48.260    iops        : min= 8402, max=17780, avg=13107.20, stdev=3528.44, samples=10
00:16:48.260   lat (usec)   : 500=0.01%, 750=2.78%, 1000=8.73%
00:16:48.260   lat (msec)   : 2=9.11%, 4=0.53%, 10=10.54%, 20=57.25%, 50=11.05%
00:16:48.260   cpu          : usr=99.00%, sys=0.26%, ctx=29, majf=0, minf=5565
00:16:48.260   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:16:48.260      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:48.260      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:48.260      issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:48.260      latency   : target=0, window=0, percentile=100.00%, depth=128
00:16:48.260 
00:16:48.260 Run status group 0 (all jobs):
00:16:48.260    READ: bw=30.0MiB/s (31.5MB/s), 30.0MiB/s-30.0MiB/s (31.5MB/s-31.5MB/s), io=255MiB (267MB), run=8483-8483msec
00:16:48.260   WRITE: bw=52.6MiB/s (55.2MB/s), 52.6MiB/s-52.6MiB/s (55.2MB/s-55.2MB/s), io=256MiB (268MB), run=4863-4863msec
00:16:49.633 -----------------------------------------------------
00:16:49.633 Suppressions used:
00:16:49.633   count      bytes template
00:16:49.633       1          5 /usr/src/fio/parse.c
00:16:49.633       2        192 /usr/src/fio/iolog.c
00:16:49.633       1          8 libtcmalloc_minimal.so
00:16:49.633       1        904 libcrypto.so
00:16:49.633 -----------------------------------------------------
00:16:49.633 
00:16:49.633 21:48:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:16:49.633 21:48:08 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable
00:16:49.633 21:48:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:16:49.892 21:48:08 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:49.892 Remove shared memory files
00:16:49.892 21:48:08 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:16:49.892 21:48:08 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:16:49.892 21:48:08 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:16:49.892 21:48:08 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:16:49.892 21:48:08 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57492 /dev/shm/spdk_tgt_trace.pid71764
00:16:49.892 21:48:08 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:16:49.892 21:48:08 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:16:49.892 ************************************
00:16:49.892 END TEST ftl_fio_basic
00:16:49.892 ************************************
00:16:49.892 
00:16:49.892 real	1m4.647s
00:16:49.892 user	2m20.061s
00:16:49.892 sys	0m3.042s
00:16:49.892 21:48:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable
00:16:49.892 21:48:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:16:49.892 21:48:08 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:16:49.892 21:48:08 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:16:49.892 21:48:08 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:16:49.892 21:48:08 ftl -- common/autotest_common.sh@10 -- # set +x
00:16:49.892 ************************************
00:16:49.892 START TEST ftl_bdevperf
00:16:49.892 ************************************
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:16:49.892 * Looking for test storage...
00:16:49.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:16:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.892 --rc genhtml_branch_coverage=1
00:16:49.892 --rc genhtml_function_coverage=1
00:16:49.892 --rc genhtml_legend=1
00:16:49.892 --rc geninfo_all_blocks=1
00:16:49.892 --rc geninfo_unexecuted_blocks=1
00:16:49.892 
00:16:49.892 '
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:16:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.892 --rc genhtml_branch_coverage=1
00:16:49.892 --rc genhtml_function_coverage=1
00:16:49.892 --rc genhtml_legend=1
00:16:49.892 --rc geninfo_all_blocks=1
00:16:49.892 --rc geninfo_unexecuted_blocks=1
00:16:49.892 
00:16:49.892 '
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:16:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:49.892 --rc genhtml_branch_coverage=1
00:16:49.892 --rc genhtml_function_coverage=1
00:16:49.892 --rc genhtml_legend=1
00:16:49.892 --rc geninfo_all_blocks=1
00:16:49.892 --rc geninfo_unexecuted_blocks=1
00:16:49.892 
00:16:49.892 '
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:16:49.892 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid=
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append=
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73699
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73699
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73699 ']'
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:49.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:49.893 21:48:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:50.151 [2024-09-29 21:48:08.905966] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:16:50.151 [2024-09-29 21:48:08.906656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73699 ]
00:16:50.151 [2024-09-29 21:48:09.049561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:50.409 [2024-09-29 21:48:09.228633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:16:50.974 21:48:09 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:50.974 21:48:09 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:16:50.974 21:48:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:16:50.974 21:48:09 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0
00:16:50.974 21:48:09 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:16:50.974 21:48:09 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424
00:16:50.974 21:48:09 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev
00:16:50.974 21:48:09 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:16:51.233 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:16:51.233 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size
00:16:51.233 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:16:51.233 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1
00:16:51.233 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info
00:16:51.233 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs
00:16:51.233 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb
00:16:51.233 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:16:51.491 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[
00:16:51.491   {
00:16:51.491     "name": "nvme0n1",
00:16:51.491     "aliases": [
00:16:51.491       "aef5ba18-d4c7-4885-96be-b97ff702b90b"
00:16:51.491     ],
00:16:51.491     "product_name": "NVMe disk",
00:16:51.491     "block_size": 4096,
00:16:51.491     "num_blocks": 1310720,
00:16:51.491     "uuid": "aef5ba18-d4c7-4885-96be-b97ff702b90b",
00:16:51.491     "numa_id": -1,
00:16:51.491     "assigned_rate_limits": {
00:16:51.491       "rw_ios_per_sec": 0,
00:16:51.491       "rw_mbytes_per_sec": 0,
00:16:51.491       "r_mbytes_per_sec": 0,
00:16:51.491       "w_mbytes_per_sec": 0
00:16:51.491     },
00:16:51.491     "claimed": true,
00:16:51.491     "claim_type": "read_many_write_one",
00:16:51.491     "zoned": false,
00:16:51.491     "supported_io_types": {
00:16:51.491       "read": true,
00:16:51.491       "write": true,
00:16:51.491       "unmap": true,
00:16:51.491       "flush": true,
00:16:51.491       "reset": true,
00:16:51.491       "nvme_admin": true,
00:16:51.491       "nvme_io": true,
00:16:51.491       "nvme_io_md": false,
00:16:51.491       "write_zeroes": true,
00:16:51.491       "zcopy": false,
00:16:51.491       "get_zone_info": false,
00:16:51.491       "zone_management": false,
00:16:51.491       "zone_append": false,
00:16:51.491       "compare": true,
00:16:51.491       "compare_and_write": false,
00:16:51.491       "abort": true,
00:16:51.491       "seek_hole": false,
00:16:51.491       "seek_data": false,
00:16:51.491       "copy": true,
00:16:51.491       "nvme_iov_md": false
00:16:51.491     },
00:16:51.491     "driver_specific": {
00:16:51.491       "nvme": [
00:16:51.491         {
00:16:51.491           "pci_address": "0000:00:11.0",
00:16:51.491           "trid": {
00:16:51.491             "trtype": "PCIe",
00:16:51.491             "traddr": "0000:00:11.0"
00:16:51.491           },
00:16:51.491           "ctrlr_data": {
00:16:51.491             "cntlid": 0,
00:16:51.491             "vendor_id": "0x1b36",
00:16:51.491             "model_number": "QEMU NVMe Ctrl",
00:16:51.491             "serial_number": "12341",
00:16:51.491             "firmware_revision": "8.0.0",
00:16:51.491             "subnqn": "nqn.2019-08.org.qemu:12341",
00:16:51.491             "oacs": {
00:16:51.491               "security": 0,
00:16:51.491               "format": 1,
00:16:51.491               "firmware": 0,
00:16:51.491               "ns_manage": 1
00:16:51.491             },
00:16:51.491             "multi_ctrlr": false,
00:16:51.491             "ana_reporting": false
00:16:51.491           },
00:16:51.491           "vs": {
00:16:51.491             "nvme_version": "1.4"
00:16:51.491           },
00:16:51.491           "ns_data": {
00:16:51.491             "id": 1,
00:16:51.491             "can_share": false
00:16:51.491           }
00:16:51.491         }
00:16:51.491       ],
00:16:51.491       "mp_policy": "active_passive"
00:16:51.491     }
00:16:51.491   }
00:16:51.491 ]'
00:16:51.491 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:16:51.491 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096
00:16:51.491 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:16:51.491 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720
00:16:51.491 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120
00:16:51.492 21:48:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120
00:16:51.492 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120
00:16:51.492 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:16:51.492 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols
00:16:51.492 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:16:51.492 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:16:51.751 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=2eb9f130-f1b2-4918-88ae-270e4fdd5461
00:16:51.751 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores
00:16:51.751 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2eb9f130-f1b2-4918-88ae-270e4fdd5461
00:16:52.010 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:16:52.010 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=0fa46aaf-ca5b-44bf-9a49-b881653154a4
00:16:52.010 21:48:10 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0fa46aaf-ca5b-44bf-9a49-b881653154a4
00:16:52.268 21:48:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size=
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb
00:16:52.269 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:52.527 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[
00:16:52.527   {
00:16:52.527     "name": "432d814e-0149-4d70-95f9-f70f6756a6c7",
00:16:52.527     "aliases": [
00:16:52.527       "lvs/nvme0n1p0"
00:16:52.527     ],
00:16:52.527     "product_name": "Logical Volume",
00:16:52.527     "block_size": 4096,
00:16:52.527     "num_blocks": 26476544,
00:16:52.527     "uuid": "432d814e-0149-4d70-95f9-f70f6756a6c7",
00:16:52.527     "assigned_rate_limits": {
00:16:52.527       "rw_ios_per_sec": 0,
00:16:52.527       "rw_mbytes_per_sec": 0,
00:16:52.527       "r_mbytes_per_sec": 0,
00:16:52.527       "w_mbytes_per_sec": 0
00:16:52.527     },
00:16:52.527     "claimed": false,
00:16:52.527     "zoned": false,
00:16:52.527     "supported_io_types": {
00:16:52.527       "read": true,
00:16:52.527       "write": true,
00:16:52.527       "unmap": true,
00:16:52.527       "flush": false,
00:16:52.527       "reset": true,
00:16:52.527       "nvme_admin": false,
00:16:52.527       "nvme_io": false,
00:16:52.527       "nvme_io_md": false,
00:16:52.527       "write_zeroes": true,
00:16:52.527       "zcopy": false,
00:16:52.527       "get_zone_info": false,
00:16:52.527       "zone_management": false,
00:16:52.527       "zone_append": false,
00:16:52.527       "compare": false,
00:16:52.527       "compare_and_write": false,
00:16:52.527       "abort": false,
00:16:52.527       "seek_hole": true,
00:16:52.527       "seek_data": true,
00:16:52.527       "copy": false,
00:16:52.527       "nvme_iov_md": false
00:16:52.527     },
00:16:52.527     "driver_specific": {
00:16:52.527       "lvol": {
00:16:52.527         "lvol_store_uuid": "0fa46aaf-ca5b-44bf-9a49-b881653154a4",
00:16:52.527         "base_bdev": "nvme0n1",
00:16:52.527         "thin_provision": true,
00:16:52.527         "num_allocated_clusters": 0,
00:16:52.527         "snapshot": false,
00:16:52.527         "clone": false,
00:16:52.527         "esnap_clone": false
00:16:52.527       }
00:16:52.527     }
00:16:52.527   }
00:16:52.527 ]'
00:16:52.527 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:16:52.527 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096
00:16:52.527 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:16:52.527 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544
00:16:52.527 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:16:52.527 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424
00:16:52.527 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171
00:16:52.527 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev
00:16:52.527 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:16:52.786 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:16:52.786 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]]
00:16:52.786 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 432d814e-0149-4d70-95f9-f70f6756a6c7
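get_bdev_size in the traces above is simply block_size × num_blocks from the bdev_get_bdevs JSON, converted to MiB. Re-checking the values the trace reports, with all inputs taken from the JSON dumps above (the last line reproduces what appears to be a 5% cache-sizing rule implied by the base_size=5171 seen at ftl/common.sh@41):

  echo $((4096 * 1310720  / 1024 / 1024))   # nvme0n1        -> 5120 MiB
  echo $((4096 * 26476544 / 1024 / 1024))   # lvs/nvme0n1p0  -> 103424 MiB
  echo $((103424 * 5 / 100))                # 5% of the base -> 5171, matching base_size=5171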
00:16:52.786 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:52.786 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info
00:16:52.786 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs
00:16:52.786 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb
00:16:52.786 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:53.044 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[
00:16:53.044   {
00:16:53.044     "name": "432d814e-0149-4d70-95f9-f70f6756a6c7",
00:16:53.044     "aliases": [
00:16:53.044       "lvs/nvme0n1p0"
00:16:53.044     ],
00:16:53.044     "product_name": "Logical Volume",
00:16:53.044     "block_size": 4096,
00:16:53.044     "num_blocks": 26476544,
00:16:53.044     "uuid": "432d814e-0149-4d70-95f9-f70f6756a6c7",
00:16:53.044     "assigned_rate_limits": {
00:16:53.044       "rw_ios_per_sec": 0,
00:16:53.044       "rw_mbytes_per_sec": 0,
00:16:53.044       "r_mbytes_per_sec": 0,
00:16:53.044       "w_mbytes_per_sec": 0
00:16:53.044     },
00:16:53.044     "claimed": false,
00:16:53.044     "zoned": false,
00:16:53.044     "supported_io_types": {
00:16:53.044       "read": true,
00:16:53.044       "write": true,
00:16:53.044       "unmap": true,
00:16:53.044       "flush": false,
00:16:53.044       "reset": true,
00:16:53.044       "nvme_admin": false,
00:16:53.044       "nvme_io": false,
00:16:53.044       "nvme_io_md": false,
00:16:53.044       "write_zeroes": true,
00:16:53.044       "zcopy": false,
00:16:53.044       "get_zone_info": false,
00:16:53.044       "zone_management": false,
00:16:53.044       "zone_append": false,
00:16:53.044       "compare": false,
00:16:53.044       "compare_and_write": false,
00:16:53.044       "abort": false,
00:16:53.044       "seek_hole": true,
00:16:53.044       "seek_data": true,
00:16:53.044       "copy": false,
00:16:53.044       "nvme_iov_md": false
00:16:53.044     },
00:16:53.044     "driver_specific": {
00:16:53.044       "lvol": {
00:16:53.044         "lvol_store_uuid": "0fa46aaf-ca5b-44bf-9a49-b881653154a4",
00:16:53.044         "base_bdev": "nvme0n1",
00:16:53.044         "thin_provision": true,
00:16:53.044         "num_allocated_clusters": 0,
00:16:53.044         "snapshot": false,
00:16:53.044         "clone": false,
00:16:53.044         "esnap_clone": false
00:16:53.044       }
00:16:53.044     }
00:16:53.044   }
00:16:53.044 ]'
00:16:53.044 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:16:53.044 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096
00:16:53.044 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:16:53.044 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544
00:16:53.044 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:16:53.044 21:48:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424
00:16:53.044 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171
00:16:53.044 21:48:11 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:16:53.303 21:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0
00:16:53.303 21:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:53.303 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:53.303 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info
00:16:53.303 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs
00:16:53.303 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb
00:16:53.303 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 432d814e-0149-4d70-95f9-f70f6756a6c7
00:16:53.562 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[
00:16:53.562   {
00:16:53.562     "name": "432d814e-0149-4d70-95f9-f70f6756a6c7",
00:16:53.562     "aliases": [
00:16:53.562       "lvs/nvme0n1p0"
00:16:53.562     ],
00:16:53.562     "product_name": "Logical Volume",
00:16:53.562     "block_size": 4096,
00:16:53.562     "num_blocks": 26476544,
00:16:53.562     "uuid": "432d814e-0149-4d70-95f9-f70f6756a6c7",
00:16:53.562     "assigned_rate_limits": {
00:16:53.562       "rw_ios_per_sec": 0,
00:16:53.562       "rw_mbytes_per_sec": 0,
00:16:53.562       "r_mbytes_per_sec": 0,
00:16:53.562       "w_mbytes_per_sec": 0
00:16:53.562     },
00:16:53.562     "claimed": false,
00:16:53.562     "zoned": false,
00:16:53.562     "supported_io_types": {
00:16:53.562       "read": true,
00:16:53.562       "write": true,
00:16:53.562       "unmap": true,
00:16:53.562       "flush": false,
00:16:53.562       "reset": true,
00:16:53.562       "nvme_admin": false,
00:16:53.562       "nvme_io": false,
00:16:53.562       "nvme_io_md": false,
00:16:53.562       "write_zeroes": true,
00:16:53.562       "zcopy": false,
00:16:53.562       "get_zone_info": false,
00:16:53.562       "zone_management": false,
00:16:53.562       "zone_append": false,
00:16:53.562       "compare": false,
00:16:53.562       "compare_and_write": false,
00:16:53.562       "abort": false,
00:16:53.562       "seek_hole": true,
00:16:53.562       "seek_data": true,
00:16:53.562       "copy": false,
00:16:53.562       "nvme_iov_md": false
00:16:53.562     },
00:16:53.562     "driver_specific": {
00:16:53.562       "lvol": {
00:16:53.562         "lvol_store_uuid": "0fa46aaf-ca5b-44bf-9a49-b881653154a4",
00:16:53.562         "base_bdev": "nvme0n1",
00:16:53.562         "thin_provision": true,
00:16:53.562         "num_allocated_clusters": 0,
00:16:53.562         "snapshot": false,
00:16:53.562         "clone": false,
00:16:53.562         "esnap_clone": false
00:16:53.562       }
00:16:53.562     }
00:16:53.562   }
00:16:53.562 ]'
00:16:53.562 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096
00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544
00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424
00:16:53.563 21:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20
00:16:53.563 21:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 432d814e-0149-4d70-95f9-f70f6756a6c7 -c nvc0n1p0 --l2p_dram_limit 20
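The bdev_ftl_create call above is the point where the FTL device is assembled: -d names the thin-provisioned lvol as the base device, -c the cache split, and --l2p_dram_limit caps the DRAM used for the L2P table (20 MiB here, matching l2p_dram_size_mb=20). Collected in one place, the bring-up sequence this log has traced looks as follows — the commands are exactly as issued by the test, with the names and UUIDs from this particular run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base bdev
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 0fa46aaf-ca5b-44bf-9a49-b881653154a4   # thin lvol, 103424 MiB
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split -> nvc0n1p0
  $rpc -t 240 bdev_ftl_create -b ftl0 -d 432d814e-0149-4d70-95f9-f70f6756a6c7 -c nvc0n1p0 --l2p_dram_limit 20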
common/autotest_common.sh@1380 -- # local bs 00:16:53.303 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:53.303 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 432d814e-0149-4d70-95f9-f70f6756a6c7 00:16:53.562 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:53.562 { 00:16:53.562 "name": "432d814e-0149-4d70-95f9-f70f6756a6c7", 00:16:53.562 "aliases": [ 00:16:53.562 "lvs/nvme0n1p0" 00:16:53.562 ], 00:16:53.562 "product_name": "Logical Volume", 00:16:53.562 "block_size": 4096, 00:16:53.562 "num_blocks": 26476544, 00:16:53.562 "uuid": "432d814e-0149-4d70-95f9-f70f6756a6c7", 00:16:53.562 "assigned_rate_limits": { 00:16:53.562 "rw_ios_per_sec": 0, 00:16:53.562 "rw_mbytes_per_sec": 0, 00:16:53.562 "r_mbytes_per_sec": 0, 00:16:53.562 "w_mbytes_per_sec": 0 00:16:53.562 }, 00:16:53.562 "claimed": false, 00:16:53.562 "zoned": false, 00:16:53.562 "supported_io_types": { 00:16:53.562 "read": true, 00:16:53.562 "write": true, 00:16:53.562 "unmap": true, 00:16:53.562 "flush": false, 00:16:53.562 "reset": true, 00:16:53.562 "nvme_admin": false, 00:16:53.562 "nvme_io": false, 00:16:53.562 "nvme_io_md": false, 00:16:53.562 "write_zeroes": true, 00:16:53.562 "zcopy": false, 00:16:53.562 "get_zone_info": false, 00:16:53.562 "zone_management": false, 00:16:53.562 "zone_append": false, 00:16:53.562 "compare": false, 00:16:53.562 "compare_and_write": false, 00:16:53.562 "abort": false, 00:16:53.562 "seek_hole": true, 00:16:53.562 "seek_data": true, 00:16:53.562 "copy": false, 00:16:53.562 "nvme_iov_md": false 00:16:53.562 }, 00:16:53.562 "driver_specific": { 00:16:53.562 "lvol": { 00:16:53.562 "lvol_store_uuid": "0fa46aaf-ca5b-44bf-9a49-b881653154a4", 00:16:53.563 "base_bdev": "nvme0n1", 00:16:53.563 "thin_provision": true, 00:16:53.563 "num_allocated_clusters": 0, 00:16:53.563 "snapshot": false, 00:16:53.563 "clone": false, 00:16:53.563 "esnap_clone": false 00:16:53.563 } 00:16:53.563 } 00:16:53.563 } 00:16:53.563 ]' 00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:53.563 21:48:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:53.563 21:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:53.563 21:48:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 432d814e-0149-4d70-95f9-f70f6756a6c7 -c nvc0n1p0 --l2p_dram_limit 20 00:16:53.823 [2024-09-29 21:48:12.661776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.823 [2024-09-29 21:48:12.661843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:53.823 [2024-09-29 21:48:12.661856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:53.823 [2024-09-29 21:48:12.661866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.823 [2024-09-29 21:48:12.661923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.823 [2024-09-29 21:48:12.661933] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:53.823 [2024-09-29 21:48:12.661939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:53.823 [2024-09-29 21:48:12.661947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.823 [2024-09-29 21:48:12.661962] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:53.823 [2024-09-29 21:48:12.662657] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:53.823 [2024-09-29 21:48:12.662672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.823 [2024-09-29 21:48:12.662681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:53.823 [2024-09-29 21:48:12.662688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:16:53.823 [2024-09-29 21:48:12.662696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.823 [2024-09-29 21:48:12.662725] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a811f5a6-8db0-4650-8e94-2d1dd1b45495 00:16:53.823 [2024-09-29 21:48:12.664051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.823 [2024-09-29 21:48:12.664078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:53.823 [2024-09-29 21:48:12.664092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:53.823 [2024-09-29 21:48:12.664099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.823 [2024-09-29 21:48:12.670917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.823 [2024-09-29 21:48:12.670945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:53.823 [2024-09-29 21:48:12.670955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.761 ms 00:16:53.823 [2024-09-29 21:48:12.670962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.823 [2024-09-29 21:48:12.671035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.823 [2024-09-29 21:48:12.671044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:53.823 [2024-09-29 21:48:12.671055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:53.823 [2024-09-29 21:48:12.671061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.823 [2024-09-29 21:48:12.671104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.823 [2024-09-29 21:48:12.671112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:53.823 [2024-09-29 21:48:12.671123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:53.823 [2024-09-29 21:48:12.671128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.823 [2024-09-29 21:48:12.671148] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:53.823 [2024-09-29 21:48:12.674429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.823 [2024-09-29 21:48:12.674455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:53.823 [2024-09-29 21:48:12.674463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:16:53.823 [2024-09-29 21:48:12.674470] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.823 [2024-09-29 21:48:12.674498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.823 [2024-09-29 21:48:12.674506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:53.823 [2024-09-29 21:48:12.674513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:53.823 [2024-09-29 21:48:12.674520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.823 [2024-09-29 21:48:12.674540] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:53.823 [2024-09-29 21:48:12.674657] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:53.823 [2024-09-29 21:48:12.674667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:53.823 [2024-09-29 21:48:12.674678] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:53.823 [2024-09-29 21:48:12.674687] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:53.823 [2024-09-29 21:48:12.674696] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:53.823 [2024-09-29 21:48:12.674702] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:53.823 [2024-09-29 21:48:12.674712] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:53.823 [2024-09-29 21:48:12.674717] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:53.823 [2024-09-29 21:48:12.674725] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:53.823 [2024-09-29 21:48:12.674732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.823 [2024-09-29 21:48:12.674740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:53.823 [2024-09-29 21:48:12.674746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:16:53.823 [2024-09-29 21:48:12.674754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.824 [2024-09-29 21:48:12.674816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.824 [2024-09-29 21:48:12.674825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:53.824 [2024-09-29 21:48:12.674831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:53.824 [2024-09-29 21:48:12.674841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.824 [2024-09-29 21:48:12.674910] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:53.824 [2024-09-29 21:48:12.674919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:53.824 [2024-09-29 21:48:12.674925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:53.824 [2024-09-29 21:48:12.674933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.824 [2024-09-29 21:48:12.674940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:53.824 [2024-09-29 21:48:12.674947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:53.824 [2024-09-29 21:48:12.674952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:53.824 
[2024-09-29 21:48:12.674958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:53.824 [2024-09-29 21:48:12.674964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:53.824 [2024-09-29 21:48:12.674971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:53.824 [2024-09-29 21:48:12.674976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:53.824 [2024-09-29 21:48:12.674989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:53.824 [2024-09-29 21:48:12.674994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:53.824 [2024-09-29 21:48:12.675001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:53.824 [2024-09-29 21:48:12.675006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:53.824 [2024-09-29 21:48:12.675016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.824 [2024-09-29 21:48:12.675021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:53.824 [2024-09-29 21:48:12.675029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:53.824 [2024-09-29 21:48:12.675033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.824 [2024-09-29 21:48:12.675040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:53.824 [2024-09-29 21:48:12.675045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:53.824 [2024-09-29 21:48:12.675052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:53.824 [2024-09-29 21:48:12.675058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:53.824 [2024-09-29 21:48:12.675064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:53.824 [2024-09-29 21:48:12.675069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:53.824 [2024-09-29 21:48:12.675076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:53.824 [2024-09-29 21:48:12.675081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:53.824 [2024-09-29 21:48:12.675087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:53.824 [2024-09-29 21:48:12.675095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:53.824 [2024-09-29 21:48:12.675102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:53.824 [2024-09-29 21:48:12.675108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:53.824 [2024-09-29 21:48:12.675116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:53.824 [2024-09-29 21:48:12.675121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:53.824 [2024-09-29 21:48:12.675127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:53.824 [2024-09-29 21:48:12.675133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:53.824 [2024-09-29 21:48:12.675140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:53.824 [2024-09-29 21:48:12.675145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:53.824 [2024-09-29 21:48:12.675151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:53.824 [2024-09-29 21:48:12.675157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:53.824 [2024-09-29 21:48:12.675164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.824 [2024-09-29 21:48:12.675168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:53.824 [2024-09-29 21:48:12.675175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:53.824 [2024-09-29 21:48:12.675179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.824 [2024-09-29 21:48:12.675186] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:53.824 [2024-09-29 21:48:12.675193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:53.824 [2024-09-29 21:48:12.675199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:53.824 [2024-09-29 21:48:12.675205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.824 [2024-09-29 21:48:12.675214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:53.824 [2024-09-29 21:48:12.675220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:53.824 [2024-09-29 21:48:12.675226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:53.824 [2024-09-29 21:48:12.675232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:53.824 [2024-09-29 21:48:12.675238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:53.824 [2024-09-29 21:48:12.675243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:53.824 [2024-09-29 21:48:12.675253] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:53.824 [2024-09-29 21:48:12.675262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:53.824 [2024-09-29 21:48:12.675270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:53.824 [2024-09-29 21:48:12.675277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:53.824 [2024-09-29 21:48:12.675284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:53.824 [2024-09-29 21:48:12.675289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:53.824 [2024-09-29 21:48:12.675296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:53.824 [2024-09-29 21:48:12.675302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:53.824 [2024-09-29 21:48:12.675310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:53.824 [2024-09-29 21:48:12.675315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:53.824 [2024-09-29 21:48:12.675324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:53.824 [2024-09-29 21:48:12.675329] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:53.824 [2024-09-29 21:48:12.675337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:53.824 [2024-09-29 21:48:12.675343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:53.824 [2024-09-29 21:48:12.675349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:53.824 [2024-09-29 21:48:12.675355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:53.824 [2024-09-29 21:48:12.675361] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:53.824 [2024-09-29 21:48:12.675368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:53.824 [2024-09-29 21:48:12.675375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:53.824 [2024-09-29 21:48:12.675382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:53.824 [2024-09-29 21:48:12.675661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:53.824 [2024-09-29 21:48:12.676003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:53.824 [2024-09-29 21:48:12.676121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.824 [2024-09-29 21:48:12.676140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:53.824 [2024-09-29 21:48:12.676159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:16:53.824 [2024-09-29 21:48:12.676175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.824 [2024-09-29 21:48:12.676246] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:53.824 [2024-09-29 21:48:12.676276] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:56.356 [2024-09-29 21:48:14.735708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.735905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:56.356 [2024-09-29 21:48:14.735967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2059.449 ms 00:16:56.356 [2024-09-29 21:48:14.735988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.775924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.776138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.356 [2024-09-29 21:48:14.776164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.750 ms 00:16:56.356 [2024-09-29 21:48:14.776173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.776335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.776346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:56.356 [2024-09-29 21:48:14.776362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:56.356 [2024-09-29 21:48:14.776371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.802833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.802984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.356 [2024-09-29 21:48:14.803005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.410 ms 00:16:56.356 [2024-09-29 21:48:14.803012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.803048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.803055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.356 [2024-09-29 21:48:14.803063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.356 [2024-09-29 21:48:14.803069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.803504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.803518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.356 [2024-09-29 21:48:14.803527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:16:56.356 [2024-09-29 21:48:14.803534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.803632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.803642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.356 [2024-09-29 21:48:14.803652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:56.356 [2024-09-29 21:48:14.803658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.814831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.814858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.356 [2024-09-29 
21:48:14.814869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.158 ms 00:16:56.356 [2024-09-29 21:48:14.814875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.824892] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:56.356 [2024-09-29 21:48:14.830263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.830428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:56.356 [2024-09-29 21:48:14.830443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.310 ms 00:16:56.356 [2024-09-29 21:48:14.830452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.885001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.885041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:56.356 [2024-09-29 21:48:14.885054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.525 ms 00:16:56.356 [2024-09-29 21:48:14.885062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.885213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.885226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:56.356 [2024-09-29 21:48:14.885234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:16:56.356 [2024-09-29 21:48:14.885241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.903095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.903127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:56.356 [2024-09-29 21:48:14.903137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.821 ms 00:16:56.356 [2024-09-29 21:48:14.903148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.920148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.920178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:56.356 [2024-09-29 21:48:14.920187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.973 ms 00:16:56.356 [2024-09-29 21:48:14.920195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.920669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.920689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:56.356 [2024-09-29 21:48:14.920698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:16:56.356 [2024-09-29 21:48:14.920706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:14.978219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.978259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:56.356 [2024-09-29 21:48:14.978268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.473 ms 00:16:56.356 [2024-09-29 21:48:14.978277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 
21:48:14.997416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:14.997451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:56.356 [2024-09-29 21:48:14.997460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.085 ms 00:16:56.356 [2024-09-29 21:48:14.997469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:15.015280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:15.015313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:56.356 [2024-09-29 21:48:15.015322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.784 ms 00:16:56.356 [2024-09-29 21:48:15.015330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.356 [2024-09-29 21:48:15.033222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.356 [2024-09-29 21:48:15.033253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:56.357 [2024-09-29 21:48:15.033262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.865 ms 00:16:56.357 [2024-09-29 21:48:15.033270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.357 [2024-09-29 21:48:15.033300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.357 [2024-09-29 21:48:15.033312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:56.357 [2024-09-29 21:48:15.033319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:56.357 [2024-09-29 21:48:15.033327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.357 [2024-09-29 21:48:15.033408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.357 [2024-09-29 21:48:15.033421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:56.357 [2024-09-29 21:48:15.033429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:56.357 [2024-09-29 21:48:15.033437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.357 [2024-09-29 21:48:15.034406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2372.221 ms, result 0 00:16:56.357 { 00:16:56.357 "name": "ftl0", 00:16:56.357 "uuid": "a811f5a6-8db0-4650-8e94-2d1dd1b45495" 00:16:56.357 } 00:16:56.357 21:48:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:56.357 21:48:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:56.357 21:48:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:56.357 21:48:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:56.615 [2024-09-29 21:48:15.354469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:56.615 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:56.615 Zero copy mechanism will not be used. 00:16:56.615 Running I/O for 4 seconds... 
00:17:00.809 3253.00 IOPS, 216.02 MiB/s 3205.50 IOPS, 212.87 MiB/s 3152.00 IOPS, 209.31 MiB/s 3120.25 IOPS, 207.20 MiB/s 00:17:00.809 Latency(us) 00:17:00.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.809 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:00.809 ftl0 : 4.00 3119.06 207.13 0.00 0.00 337.46 158.33 2407.19 00:17:00.809 =================================================================================================================== 00:17:00.809 Total : 3119.06 207.13 0.00 0.00 337.46 158.33 2407.19 00:17:00.809 [2024-09-29 21:48:19.364125] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:00.809 { 00:17:00.809 "results": [ 00:17:00.809 { 00:17:00.809 "job": "ftl0", 00:17:00.809 "core_mask": "0x1", 00:17:00.809 "workload": "randwrite", 00:17:00.809 "status": "finished", 00:17:00.809 "queue_depth": 1, 00:17:00.809 "io_size": 69632, 00:17:00.809 "runtime": 4.001844, 00:17:00.809 "iops": 3119.062112366199, 00:17:00.809 "mibps": 207.1252183993179, 00:17:00.809 "io_failed": 0, 00:17:00.809 "io_timeout": 0, 00:17:00.809 "avg_latency_us": 337.46001848816144, 00:17:00.809 "min_latency_us": 158.32615384615386, 00:17:00.809 "max_latency_us": 2407.1876923076925 00:17:00.809 } 00:17:00.809 ], 00:17:00.809 "core_count": 1 00:17:00.809 } 00:17:00.809 21:48:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:00.809 [2024-09-29 21:48:19.464036] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:00.809 Running I/O for 4 seconds... 00:17:04.596 11215.00 IOPS, 43.81 MiB/s 10687.00 IOPS, 41.75 MiB/s 10707.00 IOPS, 41.82 MiB/s 10635.50 IOPS, 41.54 MiB/s 00:17:04.596 Latency(us) 00:17:04.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.596 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.596 ftl0 : 4.02 10627.05 41.51 0.00 0.00 12020.85 277.27 28029.24 00:17:04.596 =================================================================================================================== 00:17:04.596 Total : 10627.05 41.51 0.00 0.00 12020.85 0.00 28029.24 00:17:04.596 [2024-09-29 21:48:23.488516] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:04.596 { 00:17:04.596 "results": [ 00:17:04.596 { 00:17:04.596 "job": "ftl0", 00:17:04.596 "core_mask": "0x1", 00:17:04.596 "workload": "randwrite", 00:17:04.596 "status": "finished", 00:17:04.596 "queue_depth": 128, 00:17:04.596 "io_size": 4096, 00:17:04.596 "runtime": 4.015225, 00:17:04.596 "iops": 10627.05078793841, 00:17:04.596 "mibps": 41.51191714038441, 00:17:04.596 "io_failed": 0, 00:17:04.596 "io_timeout": 0, 00:17:04.596 "avg_latency_us": 12020.85165250311, 00:17:04.596 "min_latency_us": 277.2676923076923, 00:17:04.596 "max_latency_us": 28029.243076923078 00:17:04.596 } 00:17:04.596 ], 00:17:04.596 "core_count": 1 00:17:04.596 } 00:17:04.596 21:48:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:04.854 [2024-09-29 21:48:23.607322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:04.854 Running I/O for 4 seconds... 
00:17:09.031 8516.00 IOPS, 33.27 MiB/s 8483.00 IOPS, 33.14 MiB/s 8499.67 IOPS, 33.20 MiB/s 8633.75 IOPS, 33.73 MiB/s 00:17:09.031 Latency(us) 00:17:09.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.031 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:09.031 Verification LBA range: start 0x0 length 0x1400000 00:17:09.031 ftl0 : 4.01 8645.69 33.77 0.00 0.00 14756.91 217.40 26819.35 00:17:09.031 =================================================================================================================== 00:17:09.031 Total : 8645.69 33.77 0.00 0.00 14756.91 0.00 26819.35 00:17:09.031 [2024-09-29 21:48:27.632401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:09.031 { 00:17:09.031 "results": [ 00:17:09.031 { 00:17:09.031 "job": "ftl0", 00:17:09.031 "core_mask": "0x1", 00:17:09.031 "workload": "verify", 00:17:09.031 "status": "finished", 00:17:09.031 "verify_range": { 00:17:09.031 "start": 0, 00:17:09.031 "length": 20971520 00:17:09.031 }, 00:17:09.031 "queue_depth": 128, 00:17:09.031 "io_size": 4096, 00:17:09.031 "runtime": 4.009167, 00:17:09.031 "iops": 8645.686248539909, 00:17:09.031 "mibps": 33.77221190835902, 00:17:09.031 "io_failed": 0, 00:17:09.031 "io_timeout": 0, 00:17:09.031 "avg_latency_us": 14756.91336768707, 00:17:09.031 "min_latency_us": 217.40307692307692, 00:17:09.031 "max_latency_us": 26819.347692307692 00:17:09.031 } 00:17:09.031 ], 00:17:09.031 "core_count": 1 00:17:09.031 } 00:17:09.031 21:48:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:09.031 [2024-09-29 21:48:27.835073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.031 [2024-09-29 21:48:27.835267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:09.031 [2024-09-29 21:48:27.835325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:09.031 [2024-09-29 21:48:27.835347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.031 [2024-09-29 21:48:27.835380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.031 [2024-09-29 21:48:27.837647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.031 [2024-09-29 21:48:27.837748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:09.031 [2024-09-29 21:48:27.837812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.178 ms 00:17:09.031 [2024-09-29 21:48:27.837830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.031 [2024-09-29 21:48:27.839662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.031 [2024-09-29 21:48:27.839757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:09.031 [2024-09-29 21:48:27.839808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.796 ms 00:17:09.031 [2024-09-29 21:48:27.839826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.031 [2024-09-29 21:48:27.966297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.031 [2024-09-29 21:48:27.966463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:09.031 [2024-09-29 21:48:27.966523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 126.439 ms 00:17:09.031 [2024-09-29 
21:48:27.966543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.031 [2024-09-29 21:48:27.971209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.031 [2024-09-29 21:48:27.971312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:09.031 [2024-09-29 21:48:27.971372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.628 ms 00:17:09.032 [2024-09-29 21:48:27.971404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.032 [2024-09-29 21:48:27.990029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.032 [2024-09-29 21:48:27.990143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:09.032 [2024-09-29 21:48:27.990190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.573 ms 00:17:09.032 [2024-09-29 21:48:27.990208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.032 [2024-09-29 21:48:28.002960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.032 [2024-09-29 21:48:28.003060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:09.032 [2024-09-29 21:48:28.003104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.716 ms 00:17:09.032 [2024-09-29 21:48:28.003123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.032 [2024-09-29 21:48:28.003234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.032 [2024-09-29 21:48:28.003255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:09.032 [2024-09-29 21:48:28.003276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:09.032 [2024-09-29 21:48:28.003292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.291 [2024-09-29 21:48:28.021473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.291 [2024-09-29 21:48:28.021568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:09.291 [2024-09-29 21:48:28.021626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.147 ms 00:17:09.291 [2024-09-29 21:48:28.021644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.291 [2024-09-29 21:48:28.039109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.291 [2024-09-29 21:48:28.039198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:09.291 [2024-09-29 21:48:28.039239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.430 ms 00:17:09.291 [2024-09-29 21:48:28.039257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.291 [2024-09-29 21:48:28.056132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.291 [2024-09-29 21:48:28.056225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:09.291 [2024-09-29 21:48:28.056240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.830 ms 00:17:09.291 [2024-09-29 21:48:28.056246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.291 [2024-09-29 21:48:28.073769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.291 [2024-09-29 21:48:28.073884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:09.291 [2024-09-29 21:48:28.073901] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.464 ms 00:17:09.291 [2024-09-29 21:48:28.073908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.291 [2024-09-29 21:48:28.073934] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:09.291 [2024-09-29 21:48:28.073950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.073960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.073967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.073975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.073981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.073989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.073995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:17:09.291 [2024-09-29 21:48:28.074112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:09.291 [2024-09-29 21:48:28.074249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074689] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:09.292 [2024-09-29 21:48:28.074716] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:09.292 [2024-09-29 21:48:28.074724] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a811f5a6-8db0-4650-8e94-2d1dd1b45495 00:17:09.292 [2024-09-29 21:48:28.074730] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:09.292 [2024-09-29 21:48:28.074738] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:09.292 [2024-09-29 21:48:28.074743] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:09.292 [2024-09-29 21:48:28.074750] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:09.292 [2024-09-29 21:48:28.074756] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:09.292 [2024-09-29 21:48:28.074764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:09.292 [2024-09-29 21:48:28.074769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:09.292 [2024-09-29 21:48:28.074777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:09.292 [2024-09-29 21:48:28.074781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:09.292 [2024-09-29 21:48:28.074788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.292 [2024-09-29 21:48:28.074795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:09.292 [2024-09-29 21:48:28.074805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:17:09.292 [2024-09-29 21:48:28.074810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.292 [2024-09-29 21:48:28.085178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.292 [2024-09-29 21:48:28.085279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:09.292 [2024-09-29 21:48:28.085294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.340 ms 00:17:09.292 [2024-09-29 21:48:28.085301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.292 [2024-09-29 21:48:28.085610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.292 [2024-09-29 21:48:28.085619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:09.292 [2024-09-29 21:48:28.085628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:09.292 [2024-09-29 21:48:28.085633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.292 [2024-09-29 21:48:28.110757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:09.292 [2024-09-29 21:48:28.110788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.292 [2024-09-29 21:48:28.110801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:09.292 [2024-09-29 21:48:28.110808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.292 [2024-09-29 21:48:28.110863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:09.292 [2024-09-29 21:48:28.074788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:09.292 [2024-09-29 21:48:28.074795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:09.292 [2024-09-29 21:48:28.074805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms
00:17:09.292 [2024-09-29 21:48:28.074810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.292 [2024-09-29 21:48:28.085178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:09.292 [2024-09-29 21:48:28.085279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:09.292 [2024-09-29 21:48:28.085294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.340 ms
00:17:09.292 [2024-09-29 21:48:28.085301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.292 [2024-09-29 21:48:28.085610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:09.292 [2024-09-29 21:48:28.085619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:09.292 [2024-09-29 21:48:28.085628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms
00:17:09.292 [2024-09-29 21:48:28.085633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.292 [2024-09-29 21:48:28.110757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.292 [2024-09-29 21:48:28.110788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:09.292 [2024-09-29 21:48:28.110801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.292 [2024-09-29 21:48:28.110808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.292 [2024-09-29 21:48:28.110863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.292 [2024-09-29 21:48:28.110872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:09.292 [2024-09-29 21:48:28.110880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.292 [2024-09-29 21:48:28.110886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.292 [2024-09-29 21:48:28.110944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.292 [2024-09-29 21:48:28.110953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:09.292 [2024-09-29 21:48:28.110961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.292 [2024-09-29 21:48:28.110967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.292 [2024-09-29 21:48:28.110982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.292 [2024-09-29 21:48:28.110988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:09.292 [2024-09-29 21:48:28.110998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.292 [2024-09-29 21:48:28.111004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.292 [2024-09-29 21:48:28.174051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.292 [2024-09-29 21:48:28.174099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:09.292 [2024-09-29 21:48:28.174113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.293 [2024-09-29 21:48:28.174132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.293 [2024-09-29 21:48:28.225252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.293 [2024-09-29 21:48:28.225305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:09.293 [2024-09-29 21:48:28.225320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.293 [2024-09-29 21:48:28.225326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.293 [2024-09-29 21:48:28.225499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.293 [2024-09-29 21:48:28.225509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:09.293 [2024-09-29 21:48:28.225518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.293 [2024-09-29 21:48:28.225524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.293 [2024-09-29 21:48:28.225560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.293 [2024-09-29 21:48:28.225568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:09.293 [2024-09-29 21:48:28.225576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.293 [2024-09-29 21:48:28.225585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.293 [2024-09-29 21:48:28.225667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.293 [2024-09-29 21:48:28.225676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:09.293 [2024-09-29 21:48:28.225686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.293 [2024-09-29 21:48:28.225692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
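
Each management step in this trace is logged by trace_step() in mngt/ftl_mngt.c as a quadruple: Action or Rollback, the step name, its duration, and a status; the 0.000 ms Rollback entries are FTL unwinding its startup steps in reverse, and the remaining ones continue just below. To pair names with durations when triaging such a trace, something like this hypothetical one-liner works on the one-entry-per-line form above:

  awk -F': ' '/trace_step.*name:/     { step = $NF }
              /trace_step.*duration:/ { print $NF " - " step }' console.log
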
00:17:09.293 [2024-09-29 21:48:28.225717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.293 [2024-09-29 21:48:28.225724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:09.293 [2024-09-29 21:48:28.225732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.293 [2024-09-29 21:48:28.225738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.293 [2024-09-29 21:48:28.225773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.293 [2024-09-29 21:48:28.225780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:09.293 [2024-09-29 21:48:28.225788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.293 [2024-09-29 21:48:28.225797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.293 [2024-09-29 21:48:28.225838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:09.293 [2024-09-29 21:48:28.225845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:09.293 [2024-09-29 21:48:28.225853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:09.293 [2024-09-29 21:48:28.225862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:09.293 [2024-09-29 21:48:28.225977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 390.864 ms, result 0
00:17:09.293 true
00:17:09.293 21:48:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73699
00:17:09.293 21:48:28 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73699 ']'
00:17:09.293 21:48:28 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73699
00:17:09.293 21:48:28 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname
00:17:09.293 21:48:28 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:09.293 21:48:28 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73699
00:17:09.551 killing process with pid 73699
Received shutdown signal, test time was about 4.000000 seconds
00:17:09.551
00:17:09.551 Latency(us)
00:17:09.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:09.551 ===================================================================================================================
00:17:09.551 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:09.551 21:48:28 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:09.551 21:48:28 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:09.551 21:48:28 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73699'
00:17:09.551 21:48:28 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73699
00:17:09.551 21:48:28 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73699
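
The xtrace above is killprocess from autotest_common.sh tearing down the bdevperf app (pid 73699): it requires a non-empty pid, probes it with kill -0, resolves the process name via ps (here reactor_0, SPDK's reactor thread) so that a sudo wrapper is never signalled, then kills and reaps it. A condensed sketch of that pattern (a simplified reading of the traced calls, not the verbatim function):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                    # @950: require a pid
    kill -0 "$pid" || return 0                   # @954: nothing to do if already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")      # @955/@956: Linux path via comm=
    [ "$name" = sudo ] && return 1               # @960: never signal sudo itself
    echo "killing process with pid $pid"         # @968
    kill "$pid"                                  # @969
    wait "$pid"                                  # @974: reap the child process
  }
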
00:17:14.816 21:48:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:17:14.816 21:48:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:17:14.816 Remove shared memory files
00:17:14.816 21:48:33 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:17:14.816 21:48:33 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:17:14.816 21:48:33 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:17:14.816 21:48:33 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
21:48:33 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:17:14.816 21:48:33 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:17:14.816 ************************************
00:17:14.816 END TEST ftl_bdevperf
00:17:14.816 ************************************
00:17:14.816
00:17:14.816 real 0m24.852s
00:17:14.816 user 0m27.516s
00:17:14.816 sys 0m0.945s
00:17:14.816 21:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:14.816 21:48:33 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:14.816 21:48:33 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:17:14.816 21:48:33 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:17:14.816 21:48:33 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:14.816 21:48:33 ftl -- common/autotest_common.sh@10 -- # set +x
00:17:14.816 ************************************
00:17:14.816 START TEST ftl_trim
00:17:14.816 ************************************
00:17:14.816 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:17:14.816 * Looking for test storage...
00:17:14.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:17:14.816 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:17:14.816 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version
00:17:14.816 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:17:14.816 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.816 21:48:33 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:14.816 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.816 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:14.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.816 --rc genhtml_branch_coverage=1 00:17:14.816 --rc genhtml_function_coverage=1 00:17:14.816 --rc genhtml_legend=1 00:17:14.816 --rc geninfo_all_blocks=1 00:17:14.816 --rc geninfo_unexecuted_blocks=1 00:17:14.816 00:17:14.816 ' 00:17:14.816 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:14.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.816 --rc genhtml_branch_coverage=1 00:17:14.816 --rc genhtml_function_coverage=1 00:17:14.816 --rc genhtml_legend=1 00:17:14.816 --rc geninfo_all_blocks=1 00:17:14.816 --rc geninfo_unexecuted_blocks=1 00:17:14.816 00:17:14.816 ' 00:17:14.816 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:14.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.816 --rc genhtml_branch_coverage=1 00:17:14.816 --rc genhtml_function_coverage=1 00:17:14.816 --rc genhtml_legend=1 00:17:14.816 --rc geninfo_all_blocks=1 00:17:14.816 --rc geninfo_unexecuted_blocks=1 00:17:14.816 00:17:14.816 ' 00:17:14.816 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:14.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.816 --rc genhtml_branch_coverage=1 00:17:14.816 --rc genhtml_function_coverage=1 00:17:14.816 --rc genhtml_legend=1 00:17:14.816 --rc geninfo_all_blocks=1 00:17:14.816 --rc geninfo_unexecuted_blocks=1 00:17:14.816 00:17:14.816 ' 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
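
A few entries above, lt 1.15 2 dispatched into cmp_versions from scripts/common.sh, which splits both version strings on ".", "-" and ":" (the IFS=.-: reads in the trace) and compares them field by field, so 1.15 sorts below 2 on the first field alone. A condensed reimplementation of that comparison (hypothetical sketch, not the verbatim script, which also sanitizes each field through decimal):

  lt() {  # succeeds when version $1 is strictly older than $2
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal is not "less than"
  }
  lt 1.15 2 && echo older    # first fields: 1 < 2 -> older
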
00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:14.816 21:48:33 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:14.817 21:48:33 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=74039 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 74039 00:17:14.817 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74039 ']' 00:17:14.817 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.817 21:48:33 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:14.817 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.817 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.817 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.817 21:48:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:15.075 [2024-09-29 21:48:33.815791] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:15.075 [2024-09-29 21:48:33.816113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74039 ] 00:17:15.075 [2024-09-29 21:48:33.967818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:15.332 [2024-09-29 21:48:34.189032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.332 [2024-09-29 21:48:34.189243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.332 [2024-09-29 21:48:34.189310] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.899 21:48:34 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.899 21:48:34 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:15.899 21:48:34 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:15.899 21:48:34 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:15.899 21:48:34 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:15.899 21:48:34 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:15.899 21:48:34 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:15.899 21:48:34 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:16.157 21:48:35 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:16.157 21:48:35 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:16.157 21:48:35 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:16.157 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:16.157 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:16.157 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:16.157 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:16.157 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:16.415 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:16.415 { 00:17:16.415 "name": "nvme0n1", 00:17:16.415 "aliases": [ 
00:17:16.415 "f8a993c8-d3ae-4505-af28-2d4c81c988b3" 00:17:16.415 ], 00:17:16.415 "product_name": "NVMe disk", 00:17:16.415 "block_size": 4096, 00:17:16.415 "num_blocks": 1310720, 00:17:16.415 "uuid": "f8a993c8-d3ae-4505-af28-2d4c81c988b3", 00:17:16.415 "numa_id": -1, 00:17:16.415 "assigned_rate_limits": { 00:17:16.415 "rw_ios_per_sec": 0, 00:17:16.415 "rw_mbytes_per_sec": 0, 00:17:16.415 "r_mbytes_per_sec": 0, 00:17:16.415 "w_mbytes_per_sec": 0 00:17:16.415 }, 00:17:16.415 "claimed": true, 00:17:16.415 "claim_type": "read_many_write_one", 00:17:16.415 "zoned": false, 00:17:16.415 "supported_io_types": { 00:17:16.415 "read": true, 00:17:16.415 "write": true, 00:17:16.415 "unmap": true, 00:17:16.415 "flush": true, 00:17:16.415 "reset": true, 00:17:16.415 "nvme_admin": true, 00:17:16.415 "nvme_io": true, 00:17:16.415 "nvme_io_md": false, 00:17:16.415 "write_zeroes": true, 00:17:16.415 "zcopy": false, 00:17:16.415 "get_zone_info": false, 00:17:16.415 "zone_management": false, 00:17:16.415 "zone_append": false, 00:17:16.415 "compare": true, 00:17:16.415 "compare_and_write": false, 00:17:16.415 "abort": true, 00:17:16.415 "seek_hole": false, 00:17:16.415 "seek_data": false, 00:17:16.415 "copy": true, 00:17:16.415 "nvme_iov_md": false 00:17:16.415 }, 00:17:16.415 "driver_specific": { 00:17:16.415 "nvme": [ 00:17:16.415 { 00:17:16.415 "pci_address": "0000:00:11.0", 00:17:16.415 "trid": { 00:17:16.415 "trtype": "PCIe", 00:17:16.415 "traddr": "0000:00:11.0" 00:17:16.415 }, 00:17:16.415 "ctrlr_data": { 00:17:16.415 "cntlid": 0, 00:17:16.415 "vendor_id": "0x1b36", 00:17:16.415 "model_number": "QEMU NVMe Ctrl", 00:17:16.415 "serial_number": "12341", 00:17:16.415 "firmware_revision": "8.0.0", 00:17:16.415 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:16.415 "oacs": { 00:17:16.415 "security": 0, 00:17:16.415 "format": 1, 00:17:16.415 "firmware": 0, 00:17:16.415 "ns_manage": 1 00:17:16.415 }, 00:17:16.415 "multi_ctrlr": false, 00:17:16.415 "ana_reporting": false 00:17:16.415 }, 00:17:16.415 "vs": { 00:17:16.415 "nvme_version": "1.4" 00:17:16.415 }, 00:17:16.415 "ns_data": { 00:17:16.415 "id": 1, 00:17:16.415 "can_share": false 00:17:16.415 } 00:17:16.415 } 00:17:16.415 ], 00:17:16.415 "mp_policy": "active_passive" 00:17:16.415 } 00:17:16.415 } 00:17:16.415 ]' 00:17:16.415 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:16.415 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:16.415 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:16.415 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:16.415 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:16.415 21:48:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:17:16.415 21:48:35 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:16.415 21:48:35 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:16.415 21:48:35 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:16.415 21:48:35 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:16.415 21:48:35 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:16.673 21:48:35 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=0fa46aaf-ca5b-44bf-9a49-b881653154a4 00:17:16.673 21:48:35 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:16.673 21:48:35 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 0fa46aaf-ca5b-44bf-9a49-b881653154a4 00:17:16.931 21:48:35 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:17.190 21:48:35 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=07046a8a-96ee-477f-b0ab-146d3fa3cdde 00:17:17.190 21:48:35 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 07046a8a-96ee-477f-b0ab-146d3fa3cdde 00:17:17.457 21:48:36 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=6263d610-54d3-4633-a9f9-0061a52a717b 00:17:17.457 21:48:36 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6263d610-54d3-4633-a9f9-0061a52a717b 00:17:17.457 21:48:36 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:17.457 21:48:36 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:17.457 21:48:36 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=6263d610-54d3-4633-a9f9-0061a52a717b 00:17:17.457 21:48:36 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:17.457 21:48:36 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 6263d610-54d3-4633-a9f9-0061a52a717b 00:17:17.457 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=6263d610-54d3-4633-a9f9-0061a52a717b 00:17:17.457 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:17.457 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:17.457 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:17.457 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6263d610-54d3-4633-a9f9-0061a52a717b 00:17:17.457 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:17.457 { 00:17:17.457 "name": "6263d610-54d3-4633-a9f9-0061a52a717b", 00:17:17.457 "aliases": [ 00:17:17.457 "lvs/nvme0n1p0" 00:17:17.457 ], 00:17:17.457 "product_name": "Logical Volume", 00:17:17.457 "block_size": 4096, 00:17:17.457 "num_blocks": 26476544, 00:17:17.457 "uuid": "6263d610-54d3-4633-a9f9-0061a52a717b", 00:17:17.457 "assigned_rate_limits": { 00:17:17.457 "rw_ios_per_sec": 0, 00:17:17.457 "rw_mbytes_per_sec": 0, 00:17:17.457 "r_mbytes_per_sec": 0, 00:17:17.457 "w_mbytes_per_sec": 0 00:17:17.457 }, 00:17:17.457 "claimed": false, 00:17:17.457 "zoned": false, 00:17:17.457 "supported_io_types": { 00:17:17.457 "read": true, 00:17:17.457 "write": true, 00:17:17.457 "unmap": true, 00:17:17.457 "flush": false, 00:17:17.457 "reset": true, 00:17:17.457 "nvme_admin": false, 00:17:17.457 "nvme_io": false, 00:17:17.457 "nvme_io_md": false, 00:17:17.457 "write_zeroes": true, 00:17:17.457 "zcopy": false, 00:17:17.457 "get_zone_info": false, 00:17:17.457 "zone_management": false, 00:17:17.457 "zone_append": false, 00:17:17.457 "compare": false, 00:17:17.457 "compare_and_write": false, 00:17:17.457 "abort": false, 00:17:17.457 "seek_hole": true, 00:17:17.457 "seek_data": true, 00:17:17.457 "copy": false, 00:17:17.457 "nvme_iov_md": false 00:17:17.457 }, 00:17:17.457 "driver_specific": { 00:17:17.457 "lvol": { 00:17:17.457 "lvol_store_uuid": "07046a8a-96ee-477f-b0ab-146d3fa3cdde", 00:17:17.457 "base_bdev": "nvme0n1", 00:17:17.457 "thin_provision": true, 00:17:17.457 "num_allocated_clusters": 0, 00:17:17.457 "snapshot": false, 00:17:17.457 "clone": false, 00:17:17.457 "esnap_clone": false 00:17:17.457 } 00:17:17.457 } 00:17:17.457 } 00:17:17.457 ]' 00:17:17.457 21:48:36 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:17.457 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:17.457 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:17.715 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:17.715 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:17.715 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:17.715 21:48:36 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:17.715 21:48:36 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:17.715 21:48:36 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:17.972 21:48:36 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:17.972 21:48:36 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:17.972 21:48:36 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 6263d610-54d3-4633-a9f9-0061a52a717b 00:17:17.972 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=6263d610-54d3-4633-a9f9-0061a52a717b 00:17:17.972 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:17.972 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:17.972 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:17.972 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6263d610-54d3-4633-a9f9-0061a52a717b 00:17:17.972 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:17.972 { 00:17:17.972 "name": "6263d610-54d3-4633-a9f9-0061a52a717b", 00:17:17.972 "aliases": [ 00:17:17.972 "lvs/nvme0n1p0" 00:17:17.972 ], 00:17:17.972 "product_name": "Logical Volume", 00:17:17.972 "block_size": 4096, 00:17:17.972 "num_blocks": 26476544, 00:17:17.972 "uuid": "6263d610-54d3-4633-a9f9-0061a52a717b", 00:17:17.972 "assigned_rate_limits": { 00:17:17.972 "rw_ios_per_sec": 0, 00:17:17.972 "rw_mbytes_per_sec": 0, 00:17:17.972 "r_mbytes_per_sec": 0, 00:17:17.972 "w_mbytes_per_sec": 0 00:17:17.972 }, 00:17:17.972 "claimed": false, 00:17:17.972 "zoned": false, 00:17:17.972 "supported_io_types": { 00:17:17.972 "read": true, 00:17:17.972 "write": true, 00:17:17.972 "unmap": true, 00:17:17.972 "flush": false, 00:17:17.972 "reset": true, 00:17:17.972 "nvme_admin": false, 00:17:17.972 "nvme_io": false, 00:17:17.972 "nvme_io_md": false, 00:17:17.972 "write_zeroes": true, 00:17:17.972 "zcopy": false, 00:17:17.972 "get_zone_info": false, 00:17:17.972 "zone_management": false, 00:17:17.972 "zone_append": false, 00:17:17.972 "compare": false, 00:17:17.972 "compare_and_write": false, 00:17:17.972 "abort": false, 00:17:17.972 "seek_hole": true, 00:17:17.972 "seek_data": true, 00:17:17.972 "copy": false, 00:17:17.972 "nvme_iov_md": false 00:17:17.972 }, 00:17:17.972 "driver_specific": { 00:17:17.972 "lvol": { 00:17:17.972 "lvol_store_uuid": "07046a8a-96ee-477f-b0ab-146d3fa3cdde", 00:17:17.972 "base_bdev": "nvme0n1", 00:17:17.972 "thin_provision": true, 00:17:17.972 "num_allocated_clusters": 0, 00:17:17.972 "snapshot": false, 00:17:17.972 "clone": false, 00:17:17.972 "esnap_clone": false 00:17:17.972 } 00:17:17.972 } 00:17:17.972 } 00:17:17.972 ]' 00:17:17.972 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:17.972 21:48:36 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:17:17.972 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:18.230 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:18.230 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:18.230 21:48:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:18.230 21:48:36 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:18.230 21:48:36 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:18.230 21:48:37 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:18.230 21:48:37 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:18.230 21:48:37 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 6263d610-54d3-4633-a9f9-0061a52a717b 00:17:18.230 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=6263d610-54d3-4633-a9f9-0061a52a717b 00:17:18.230 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:18.230 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:18.230 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:18.230 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6263d610-54d3-4633-a9f9-0061a52a717b 00:17:18.489 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:18.489 { 00:17:18.489 "name": "6263d610-54d3-4633-a9f9-0061a52a717b", 00:17:18.489 "aliases": [ 00:17:18.489 "lvs/nvme0n1p0" 00:17:18.489 ], 00:17:18.489 "product_name": "Logical Volume", 00:17:18.489 "block_size": 4096, 00:17:18.489 "num_blocks": 26476544, 00:17:18.489 "uuid": "6263d610-54d3-4633-a9f9-0061a52a717b", 00:17:18.489 "assigned_rate_limits": { 00:17:18.489 "rw_ios_per_sec": 0, 00:17:18.489 "rw_mbytes_per_sec": 0, 00:17:18.489 "r_mbytes_per_sec": 0, 00:17:18.489 "w_mbytes_per_sec": 0 00:17:18.489 }, 00:17:18.489 "claimed": false, 00:17:18.489 "zoned": false, 00:17:18.489 "supported_io_types": { 00:17:18.489 "read": true, 00:17:18.489 "write": true, 00:17:18.489 "unmap": true, 00:17:18.489 "flush": false, 00:17:18.489 "reset": true, 00:17:18.489 "nvme_admin": false, 00:17:18.489 "nvme_io": false, 00:17:18.489 "nvme_io_md": false, 00:17:18.489 "write_zeroes": true, 00:17:18.489 "zcopy": false, 00:17:18.489 "get_zone_info": false, 00:17:18.489 "zone_management": false, 00:17:18.489 "zone_append": false, 00:17:18.489 "compare": false, 00:17:18.489 "compare_and_write": false, 00:17:18.489 "abort": false, 00:17:18.489 "seek_hole": true, 00:17:18.489 "seek_data": true, 00:17:18.489 "copy": false, 00:17:18.489 "nvme_iov_md": false 00:17:18.489 }, 00:17:18.489 "driver_specific": { 00:17:18.489 "lvol": { 00:17:18.489 "lvol_store_uuid": "07046a8a-96ee-477f-b0ab-146d3fa3cdde", 00:17:18.489 "base_bdev": "nvme0n1", 00:17:18.489 "thin_provision": true, 00:17:18.489 "num_allocated_clusters": 0, 00:17:18.489 "snapshot": false, 00:17:18.489 "clone": false, 00:17:18.489 "esnap_clone": false 00:17:18.489 } 00:17:18.489 } 00:17:18.489 } 00:17:18.489 ]' 00:17:18.489 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:18.489 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:18.489 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:18.489 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:17:18.489 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:18.489 21:48:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:18.489 21:48:37 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:18.489 21:48:37 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6263d610-54d3-4633-a9f9-0061a52a717b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:18.748 [2024-09-29 21:48:37.646833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 [2024-09-29 21:48:37.646890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:18.748 [2024-09-29 21:48:37.646905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:18.748 [2024-09-29 21:48:37.646914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.649350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 [2024-09-29 21:48:37.649379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.748 [2024-09-29 21:48:37.649401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:17:18.748 [2024-09-29 21:48:37.649408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.649543] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:18.748 [2024-09-29 21:48:37.650132] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:18.748 [2024-09-29 21:48:37.650152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 [2024-09-29 21:48:37.650159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.748 [2024-09-29 21:48:37.650168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:17:18.748 [2024-09-29 21:48:37.650176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.650257] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1605127c-fec2-4388-9e4a-c6c135288dc4 00:17:18.748 [2024-09-29 21:48:37.651764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 [2024-09-29 21:48:37.651853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:18.748 [2024-09-29 21:48:37.651927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:18.748 [2024-09-29 21:48:37.651949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.659034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 [2024-09-29 21:48:37.659140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.748 [2024-09-29 21:48:37.659198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.013 ms 00:17:18.748 [2024-09-29 21:48:37.659220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.659336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 [2024-09-29 21:48:37.659366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.748 [2024-09-29 21:48:37.659383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.069 ms 00:17:18.748 [2024-09-29 21:48:37.659471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.659534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 [2024-09-29 21:48:37.659560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:18.748 [2024-09-29 21:48:37.659694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:18.748 [2024-09-29 21:48:37.659716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.659752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:18.748 [2024-09-29 21:48:37.663075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 [2024-09-29 21:48:37.663166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.748 [2024-09-29 21:48:37.663226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:17:18.748 [2024-09-29 21:48:37.663249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.663308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 [2024-09-29 21:48:37.663432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:18.748 [2024-09-29 21:48:37.663456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:18.748 [2024-09-29 21:48:37.663478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.663512] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:18.748 [2024-09-29 21:48:37.663638] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:18.748 [2024-09-29 21:48:37.663677] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:18.748 [2024-09-29 21:48:37.663718] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:18.748 [2024-09-29 21:48:37.663787] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:18.748 [2024-09-29 21:48:37.663817] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:18.748 [2024-09-29 21:48:37.663847] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:18.748 [2024-09-29 21:48:37.663887] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:18.748 [2024-09-29 21:48:37.663908] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:18.748 [2024-09-29 21:48:37.663924] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:18.748 [2024-09-29 21:48:37.663942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 [2024-09-29 21:48:37.664061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:18.748 [2024-09-29 21:48:37.664081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:17:18.748 [2024-09-29 21:48:37.664096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.664185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.748 
[2024-09-29 21:48:37.664236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:18.748 [2024-09-29 21:48:37.664256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:18.748 [2024-09-29 21:48:37.664270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.748 [2024-09-29 21:48:37.664381] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:18.748 [2024-09-29 21:48:37.664448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:18.748 [2024-09-29 21:48:37.664467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.748 [2024-09-29 21:48:37.664512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.748 [2024-09-29 21:48:37.664531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:18.748 [2024-09-29 21:48:37.664545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:18.748 [2024-09-29 21:48:37.664561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:18.748 [2024-09-29 21:48:37.664575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:18.748 [2024-09-29 21:48:37.664591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:18.748 [2024-09-29 21:48:37.664605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.748 [2024-09-29 21:48:37.664621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:18.748 [2024-09-29 21:48:37.664635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:18.748 [2024-09-29 21:48:37.664650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:18.748 [2024-09-29 21:48:37.664664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:18.748 [2024-09-29 21:48:37.664723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:18.748 [2024-09-29 21:48:37.664741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.748 [2024-09-29 21:48:37.664758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:18.748 [2024-09-29 21:48:37.664772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:18.748 [2024-09-29 21:48:37.664788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.748 [2024-09-29 21:48:37.664802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:18.749 [2024-09-29 21:48:37.664819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:18.749 [2024-09-29 21:48:37.664833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.749 [2024-09-29 21:48:37.664848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:18.749 [2024-09-29 21:48:37.664862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:18.749 [2024-09-29 21:48:37.664916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.749 [2024-09-29 21:48:37.664935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:18.749 [2024-09-29 21:48:37.664951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:18.749 [2024-09-29 21:48:37.664965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.749 [2024-09-29 21:48:37.664980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:18.749 [2024-09-29 21:48:37.664995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:18.749 [2024-09-29 21:48:37.665010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:18.749 [2024-09-29 21:48:37.665024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:18.749 [2024-09-29 21:48:37.665041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:18.749 [2024-09-29 21:48:37.665055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.749 [2024-09-29 21:48:37.665104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:18.749 [2024-09-29 21:48:37.665122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:18.749 [2024-09-29 21:48:37.665137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:18.749 [2024-09-29 21:48:37.665151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:18.749 [2024-09-29 21:48:37.665167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:18.749 [2024-09-29 21:48:37.665181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.749 [2024-09-29 21:48:37.665197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:18.749 [2024-09-29 21:48:37.665211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:18.749 [2024-09-29 21:48:37.665227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.749 [2024-09-29 21:48:37.665240] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:18.749 [2024-09-29 21:48:37.665290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:18.749 [2024-09-29 21:48:37.665310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:18.749 [2024-09-29 21:48:37.665328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:18.749 [2024-09-29 21:48:37.665342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:18.749 [2024-09-29 21:48:37.665360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:18.749 [2024-09-29 21:48:37.665374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:18.749 [2024-09-29 21:48:37.665402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:18.749 [2024-09-29 21:48:37.665418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:18.749 [2024-09-29 21:48:37.665440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:18.749 [2024-09-29 21:48:37.665492] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:18.749 [2024-09-29 21:48:37.665522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.749 [2024-09-29 21:48:37.665546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:18.749 [2024-09-29 21:48:37.665571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:18.749 [2024-09-29 21:48:37.665594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:18.749 [2024-09-29 21:48:37.665618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:18.749 [2024-09-29 21:48:37.665671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:18.749 [2024-09-29 21:48:37.665697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:18.749 [2024-09-29 21:48:37.665713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:18.749 [2024-09-29 21:48:37.665721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:18.749 [2024-09-29 21:48:37.665727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:18.749 [2024-09-29 21:48:37.665737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:18.749 [2024-09-29 21:48:37.665742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:18.749 [2024-09-29 21:48:37.665751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:18.749 [2024-09-29 21:48:37.665757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:18.749 [2024-09-29 21:48:37.665764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:18.749 [2024-09-29 21:48:37.665770] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:18.749 [2024-09-29 21:48:37.665779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:18.749 [2024-09-29 21:48:37.665787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:18.749 [2024-09-29 21:48:37.665794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:18.749 [2024-09-29 21:48:37.665799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:18.749 [2024-09-29 21:48:37.665807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:18.749 [2024-09-29 21:48:37.665813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.749 [2024-09-29 21:48:37.665821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:18.749 [2024-09-29 21:48:37.665828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:17:18.749 [2024-09-29 21:48:37.665834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.749 [2024-09-29 21:48:37.665887] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:18.749 [2024-09-29 21:48:37.665901] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:21.282 [2024-09-29 21:48:39.790276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.790353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:21.282 [2024-09-29 21:48:39.790370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2124.376 ms 00:17:21.282 [2024-09-29 21:48:39.790381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:39.830659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.830740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.282 [2024-09-29 21:48:39.830762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.999 ms 00:17:21.282 [2024-09-29 21:48:39.830778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:39.831029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.831058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:21.282 [2024-09-29 21:48:39.831073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:17:21.282 [2024-09-29 21:48:39.831092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:39.865339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.866198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.282 [2024-09-29 21:48:39.866224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.206 ms 00:17:21.282 [2024-09-29 21:48:39.866236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:39.866348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.866367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.282 [2024-09-29 21:48:39.866377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:21.282 [2024-09-29 21:48:39.866401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:39.866818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.866843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.282 [2024-09-29 21:48:39.866853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:17:21.282 [2024-09-29 21:48:39.866864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:39.866986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.867002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.282 [2024-09-29 21:48:39.867013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:21.282 [2024-09-29 21:48:39.867025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:39.883063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.883098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:21.282 [2024-09-29 21:48:39.883112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.010 ms 00:17:21.282 [2024-09-29 21:48:39.883122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:39.895497] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:21.282 [2024-09-29 21:48:39.913400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.913586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:21.282 [2024-09-29 21:48:39.913652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.167 ms 00:17:21.282 [2024-09-29 21:48:39.913677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:39.979425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.979680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:21.282 [2024-09-29 21:48:39.979753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.634 ms 00:17:21.282 [2024-09-29 21:48:39.979780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:39.980039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:39.980077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:21.282 [2024-09-29 21:48:39.980153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:17:21.282 [2024-09-29 21:48:39.980182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:40.005047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:40.005258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:21.282 [2024-09-29 21:48:40.005323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.814 ms 00:17:21.282 [2024-09-29 21:48:40.005336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:40.028189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:40.028235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:21.282 [2024-09-29 21:48:40.028252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.739 ms 00:17:21.282 [2024-09-29 21:48:40.028260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:40.028919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:40.028947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:21.282 [2024-09-29 21:48:40.028960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:17:21.282 [2024-09-29 21:48:40.028968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:40.106671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:40.106736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:21.282 [2024-09-29 21:48:40.106758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.660 ms 00:17:21.282 [2024-09-29 21:48:40.106767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
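
Three numbers printed during this startup cross-check each other: the layout reported "L2P entries: 23592960" at an address size of 4 bytes, the l2p region was sized at 90.00 MiB (the 0x5a00-block region in the superblock dump, 23040 x 4 KiB blocks, appears to match), and with --l2p_dram_limit 60 only part of that table can stay resident, hence "l2p maximum resident size is: 59 (of 60) MiB" just above. The arithmetic, using values straight from the log:

  echo $(( 23592960 * 4 / 1024 / 1024 ))     # full L2P table: 90 (MiB)
  echo $(( 0x5a00 * 4096 / 1024 / 1024 ))    # 23040-block l2p region: 90 (MiB)
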
00:17:21.282 [2024-09-29 21:48:40.131638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:40.131695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:21.282 [2024-09-29 21:48:40.131711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.769 ms 00:17:21.282 [2024-09-29 21:48:40.131721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:40.155616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:40.155661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:21.282 [2024-09-29 21:48:40.155677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.833 ms 00:17:21.282 [2024-09-29 21:48:40.155685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:40.178798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:40.178841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:21.282 [2024-09-29 21:48:40.178856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.037 ms 00:17:21.282 [2024-09-29 21:48:40.178865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:40.178945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:40.178957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:21.282 [2024-09-29 21:48:40.178971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:21.282 [2024-09-29 21:48:40.178991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:40.179074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.282 [2024-09-29 21:48:40.179089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:21.282 [2024-09-29 21:48:40.179104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:21.282 [2024-09-29 21:48:40.179112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.282 [2024-09-29 21:48:40.180072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:21.282 { 00:17:21.282 "name": "ftl0", 00:17:21.282 "uuid": "1605127c-fec2-4388-9e4a-c6c135288dc4" 00:17:21.283 } 00:17:21.283 [2024-09-29 21:48:40.183042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2532.906 ms, result 0 00:17:21.283 [2024-09-29 21:48:40.183967] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:21.283 21:48:40 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:21.283 21:48:40 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:21.283 21:48:40 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:21.283 21:48:40 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:17:21.283 21:48:40 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:21.283 21:48:40 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:21.283 21:48:40 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:21.541 21:48:40 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:21.799 [ 00:17:21.799 { 00:17:21.799 "name": "ftl0", 00:17:21.799 "aliases": [ 00:17:21.799 "1605127c-fec2-4388-9e4a-c6c135288dc4" 00:17:21.799 ], 00:17:21.799 "product_name": "FTL disk", 00:17:21.799 "block_size": 4096, 00:17:21.799 "num_blocks": 23592960, 00:17:21.799 "uuid": "1605127c-fec2-4388-9e4a-c6c135288dc4", 00:17:21.799 "assigned_rate_limits": { 00:17:21.799 "rw_ios_per_sec": 0, 00:17:21.799 "rw_mbytes_per_sec": 0, 00:17:21.799 "r_mbytes_per_sec": 0, 00:17:21.799 "w_mbytes_per_sec": 0 00:17:21.799 }, 00:17:21.799 "claimed": false, 00:17:21.799 "zoned": false, 00:17:21.799 "supported_io_types": { 00:17:21.799 "read": true, 00:17:21.799 "write": true, 00:17:21.799 "unmap": true, 00:17:21.799 "flush": true, 00:17:21.799 "reset": false, 00:17:21.799 "nvme_admin": false, 00:17:21.799 "nvme_io": false, 00:17:21.799 "nvme_io_md": false, 00:17:21.799 "write_zeroes": true, 00:17:21.799 "zcopy": false, 00:17:21.799 "get_zone_info": false, 00:17:21.799 "zone_management": false, 00:17:21.799 "zone_append": false, 00:17:21.799 "compare": false, 00:17:21.799 "compare_and_write": false, 00:17:21.799 "abort": false, 00:17:21.799 "seek_hole": false, 00:17:21.799 "seek_data": false, 00:17:21.799 "copy": false, 00:17:21.799 "nvme_iov_md": false 00:17:21.799 }, 00:17:21.799 "driver_specific": { 00:17:21.799 "ftl": { 00:17:21.799 "base_bdev": "6263d610-54d3-4633-a9f9-0061a52a717b", 00:17:21.799 "cache": "nvc0n1p0" 00:17:21.799 } 00:17:21.799 } 00:17:21.799 } 00:17:21.799 ] 00:17:21.799 21:48:40 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:17:21.799 21:48:40 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:21.799 21:48:40 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:22.058 21:48:40 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:22.058 21:48:40 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:22.058 21:48:41 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:22.058 { 00:17:22.058 "name": "ftl0", 00:17:22.058 "aliases": [ 00:17:22.058 "1605127c-fec2-4388-9e4a-c6c135288dc4" 00:17:22.058 ], 00:17:22.058 "product_name": "FTL disk", 00:17:22.058 "block_size": 4096, 00:17:22.058 "num_blocks": 23592960, 00:17:22.058 "uuid": "1605127c-fec2-4388-9e4a-c6c135288dc4", 00:17:22.058 "assigned_rate_limits": { 00:17:22.058 "rw_ios_per_sec": 0, 00:17:22.058 "rw_mbytes_per_sec": 0, 00:17:22.058 "r_mbytes_per_sec": 0, 00:17:22.058 "w_mbytes_per_sec": 0 00:17:22.058 }, 00:17:22.058 "claimed": false, 00:17:22.058 "zoned": false, 00:17:22.058 "supported_io_types": { 00:17:22.058 "read": true, 00:17:22.058 "write": true, 00:17:22.058 "unmap": true, 00:17:22.058 "flush": true, 00:17:22.058 "reset": false, 00:17:22.058 "nvme_admin": false, 00:17:22.058 "nvme_io": false, 00:17:22.058 "nvme_io_md": false, 00:17:22.058 "write_zeroes": true, 00:17:22.058 "zcopy": false, 00:17:22.058 "get_zone_info": false, 00:17:22.058 "zone_management": false, 00:17:22.058 "zone_append": false, 00:17:22.058 "compare": false, 00:17:22.058 "compare_and_write": false, 00:17:22.058 "abort": false, 00:17:22.058 "seek_hole": false, 00:17:22.058 "seek_data": false, 00:17:22.058 "copy": false, 00:17:22.058 "nvme_iov_md": false 00:17:22.058 }, 00:17:22.058 "driver_specific": { 00:17:22.058 "ftl": { 00:17:22.058 "base_bdev": "6263d610-54d3-4633-a9f9-0061a52a717b", 
00:17:22.058 "cache": "nvc0n1p0" 00:17:22.058 } 00:17:22.058 } 00:17:22.058 } 00:17:22.058 ]' 00:17:22.058 21:48:41 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:22.317 21:48:41 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:22.317 21:48:41 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:22.317 [2024-09-29 21:48:41.259710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.317 [2024-09-29 21:48:41.259771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:22.317 [2024-09-29 21:48:41.259784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:22.317 [2024-09-29 21:48:41.259794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.317 [2024-09-29 21:48:41.259818] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:22.317 [2024-09-29 21:48:41.261984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.317 [2024-09-29 21:48:41.262014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:22.317 [2024-09-29 21:48:41.262026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.149 ms 00:17:22.317 [2024-09-29 21:48:41.262033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.317 [2024-09-29 21:48:41.262546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.317 [2024-09-29 21:48:41.262562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:22.317 [2024-09-29 21:48:41.262571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:17:22.317 [2024-09-29 21:48:41.262577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.317 [2024-09-29 21:48:41.265339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.317 [2024-09-29 21:48:41.265357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:22.317 [2024-09-29 21:48:41.265366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.734 ms 00:17:22.317 [2024-09-29 21:48:41.265373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.317 [2024-09-29 21:48:41.270737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.317 [2024-09-29 21:48:41.270761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:22.317 [2024-09-29 21:48:41.270773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.296 ms 00:17:22.317 [2024-09-29 21:48:41.270780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.317 [2024-09-29 21:48:41.289683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.317 [2024-09-29 21:48:41.289714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:22.317 [2024-09-29 21:48:41.289728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.827 ms 00:17:22.317 [2024-09-29 21:48:41.289735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.587 [2024-09-29 21:48:41.302378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.587 [2024-09-29 21:48:41.302411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:22.587 [2024-09-29 21:48:41.302424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.584 ms 00:17:22.587 [2024-09-29 21:48:41.302433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.587 [2024-09-29 21:48:41.302608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.587 [2024-09-29 21:48:41.302617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:22.587 [2024-09-29 21:48:41.302626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:22.587 [2024-09-29 21:48:41.302632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.587 [2024-09-29 21:48:41.320654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.587 [2024-09-29 21:48:41.320685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:22.587 [2024-09-29 21:48:41.320695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.991 ms 00:17:22.587 [2024-09-29 21:48:41.320701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.587 [2024-09-29 21:48:41.338192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.587 [2024-09-29 21:48:41.338219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:22.587 [2024-09-29 21:48:41.338231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.426 ms 00:17:22.587 [2024-09-29 21:48:41.338237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.587 [2024-09-29 21:48:41.355256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.587 [2024-09-29 21:48:41.355284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:22.587 [2024-09-29 21:48:41.355293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.801 ms 00:17:22.587 [2024-09-29 21:48:41.355299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.587 [2024-09-29 21:48:41.372355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.587 [2024-09-29 21:48:41.372382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:22.587 [2024-09-29 21:48:41.372404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.945 ms 00:17:22.587 [2024-09-29 21:48:41.372410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.587 [2024-09-29 21:48:41.372460] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:22.587 [2024-09-29 21:48:41.372474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372530] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 
[2024-09-29 21:48:41.372731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:22.587 [2024-09-29 21:48:41.372870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:22.588 [2024-09-29 21:48:41.372912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.372999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:22.588 [2024-09-29 21:48:41.373215] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:22.588 [2024-09-29 21:48:41.373225] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1605127c-fec2-4388-9e4a-c6c135288dc4 00:17:22.588 [2024-09-29 21:48:41.373231] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:22.588 [2024-09-29 21:48:41.373239] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:22.588 [2024-09-29 21:48:41.373246] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:22.588 [2024-09-29 21:48:41.373254] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:22.588 [2024-09-29 21:48:41.373259] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:22.588 [2024-09-29 21:48:41.373267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:22.588 [2024-09-29 21:48:41.373273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:22.588 [2024-09-29 21:48:41.373279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:22.588 [2024-09-29 21:48:41.373284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:22.588 [2024-09-29 21:48:41.373290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.588 [2024-09-29 21:48:41.373296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:22.588 [2024-09-29 21:48:41.373305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:17:22.588 [2024-09-29 21:48:41.373313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.588 [2024-09-29 21:48:41.383167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.588 [2024-09-29 21:48:41.383341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:22.588 [2024-09-29 21:48:41.383359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.823 ms 00:17:22.588 [2024-09-29 21:48:41.383366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.588 [2024-09-29 21:48:41.383724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.588 [2024-09-29 21:48:41.383739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:22.588 [2024-09-29 21:48:41.383749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:17:22.588 [2024-09-29 21:48:41.383756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.588 [2024-09-29 21:48:41.419697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.588 [2024-09-29 21:48:41.419743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.588 [2024-09-29 21:48:41.419756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.588 [2024-09-29 21:48:41.419762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.588 [2024-09-29 21:48:41.419874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.588 [2024-09-29 21:48:41.419883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.588 [2024-09-29 21:48:41.419893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.588 [2024-09-29 21:48:41.419900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.588 [2024-09-29 21:48:41.419964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.588 [2024-09-29 21:48:41.419973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.588 [2024-09-29 21:48:41.419983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.588 [2024-09-29 21:48:41.419989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.588 [2024-09-29 21:48:41.420015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.588 [2024-09-29 21:48:41.420021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.588 [2024-09-29 21:48:41.420029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.588 [2024-09-29 21:48:41.420037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.588 [2024-09-29 21:48:41.486658] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.588 [2024-09-29 21:48:41.486717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.588 [2024-09-29 21:48:41.486731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.588 [2024-09-29 21:48:41.486738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.588 [2024-09-29 21:48:41.538205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.588 [2024-09-29 21:48:41.538453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.588 [2024-09-29 21:48:41.538472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.588 [2024-09-29 21:48:41.538482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.588 [2024-09-29 21:48:41.538582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.588 [2024-09-29 21:48:41.538592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.588 [2024-09-29 21:48:41.538602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.588 [2024-09-29 21:48:41.538609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.588 [2024-09-29 21:48:41.538653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.588 [2024-09-29 21:48:41.538660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.588 [2024-09-29 21:48:41.538681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.589 [2024-09-29 21:48:41.538688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.589 [2024-09-29 21:48:41.538795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.589 [2024-09-29 21:48:41.538805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.589 [2024-09-29 21:48:41.538814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.589 [2024-09-29 21:48:41.538821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.589 [2024-09-29 21:48:41.538868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.589 [2024-09-29 21:48:41.538876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:22.589 [2024-09-29 21:48:41.538884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.589 [2024-09-29 21:48:41.538892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.589 [2024-09-29 21:48:41.538943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.589 [2024-09-29 21:48:41.538951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.589 [2024-09-29 21:48:41.538960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.589 [2024-09-29 21:48:41.538966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.589 [2024-09-29 21:48:41.539018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.589 [2024-09-29 21:48:41.539029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.589 [2024-09-29 21:48:41.539037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.589 [2024-09-29 21:48:41.539043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:22.589 [2024-09-29 21:48:41.539231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 279.508 ms, result 0 00:17:22.589 true 00:17:22.589 21:48:41 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 74039 00:17:22.589 21:48:41 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74039 ']' 00:17:22.589 21:48:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74039 00:17:22.589 21:48:41 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:22.859 21:48:41 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.859 21:48:41 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74039 00:17:22.859 21:48:41 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:22.859 21:48:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:22.859 21:48:41 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74039' 00:17:22.859 killing process with pid 74039 00:17:22.859 21:48:41 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74039 00:17:22.859 21:48:41 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74039 00:17:29.424 21:48:47 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:29.683 65536+0 records in 00:17:29.683 65536+0 records out 00:17:29.683 268435456 bytes (268 MB, 256 MiB) copied, 1.09073 s, 246 MB/s 00:17:29.683 21:48:48 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:29.683 [2024-09-29 21:48:48.506032] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
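The dd step above writes 65536 blocks of 4 KiB (268435456 bytes, the 256 MiB random pattern that spdk_dd then copies into ftl0), and dd reports its rate in decimal megabytes. A quick arithmetic check of the 246 MB/s figure:

  # 268435456 bytes over 1.09073 s, in decimal MB/s and binary MiB/s.
  awk 'BEGIN { b = 268435456; s = 1.09073
               printf "%.1f MB/s  %.1f MiB/s\n", b/s/1e6, b/s/2^20 }'
  # prints: 246.1 MB/s  234.7 MiB/s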
00:17:29.683 [2024-09-29 21:48:48.506172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74223 ] 00:17:29.683 [2024-09-29 21:48:48.653364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.942 [2024-09-29 21:48:48.835081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.201 [2024-09-29 21:48:49.064200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.201 [2024-09-29 21:48:49.064270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.459 [2024-09-29 21:48:49.218485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.218554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:30.459 [2024-09-29 21:48:49.218570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:30.459 [2024-09-29 21:48:49.218579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.221566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.221779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.459 [2024-09-29 21:48:49.221798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:17:30.459 [2024-09-29 21:48:49.221811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.221983] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:30.459 [2024-09-29 21:48:49.222732] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:30.459 [2024-09-29 21:48:49.222761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.222773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.459 [2024-09-29 21:48:49.222782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:17:30.459 [2024-09-29 21:48:49.222790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.224214] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:30.459 [2024-09-29 21:48:49.237048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.237220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:30.459 [2024-09-29 21:48:49.237237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.835 ms 00:17:30.459 [2024-09-29 21:48:49.237247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.237334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.237346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:30.459 [2024-09-29 21:48:49.237357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:30.459 [2024-09-29 21:48:49.237365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.244053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:30.459 [2024-09-29 21:48:49.244197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.459 [2024-09-29 21:48:49.244212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.626 ms 00:17:30.459 [2024-09-29 21:48:49.244221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.244331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.244344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.459 [2024-09-29 21:48:49.244353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:30.459 [2024-09-29 21:48:49.244360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.244405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.244415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:30.459 [2024-09-29 21:48:49.244424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:30.459 [2024-09-29 21:48:49.244432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.244455] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:30.459 [2024-09-29 21:48:49.248050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.248079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.459 [2024-09-29 21:48:49.248089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms 00:17:30.459 [2024-09-29 21:48:49.248096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.248142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.248155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:30.459 [2024-09-29 21:48:49.248164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:30.459 [2024-09-29 21:48:49.248172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.248192] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:30.459 [2024-09-29 21:48:49.248212] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:30.459 [2024-09-29 21:48:49.248249] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:30.459 [2024-09-29 21:48:49.248264] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:30.459 [2024-09-29 21:48:49.248371] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:30.459 [2024-09-29 21:48:49.248383] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:30.459 [2024-09-29 21:48:49.248407] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:30.459 [2024-09-29 21:48:49.248417] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:30.459 [2024-09-29 21:48:49.248427] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:30.459 [2024-09-29 21:48:49.248435] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:30.459 [2024-09-29 21:48:49.248443] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:30.459 [2024-09-29 21:48:49.248452] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:30.459 [2024-09-29 21:48:49.248460] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:30.459 [2024-09-29 21:48:49.248468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.248479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:30.459 [2024-09-29 21:48:49.248487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:17:30.459 [2024-09-29 21:48:49.248494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.248595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.459 [2024-09-29 21:48:49.248605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:30.459 [2024-09-29 21:48:49.248613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:30.459 [2024-09-29 21:48:49.248621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.459 [2024-09-29 21:48:49.248723] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:30.459 [2024-09-29 21:48:49.248734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:30.459 [2024-09-29 21:48:49.248746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.459 [2024-09-29 21:48:49.248754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.459 [2024-09-29 21:48:49.248764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:30.459 [2024-09-29 21:48:49.248771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:30.459 [2024-09-29 21:48:49.248778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:30.459 [2024-09-29 21:48:49.248785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:30.459 [2024-09-29 21:48:49.248792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:30.459 [2024-09-29 21:48:49.248799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.459 [2024-09-29 21:48:49.248806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:30.459 [2024-09-29 21:48:49.248821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:30.459 [2024-09-29 21:48:49.248828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.459 [2024-09-29 21:48:49.248834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:30.459 [2024-09-29 21:48:49.248841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:30.459 [2024-09-29 21:48:49.248848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.459 [2024-09-29 21:48:49.248855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:30.459 [2024-09-29 21:48:49.248863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:30.459 [2024-09-29 21:48:49.248870] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.459 [2024-09-29 21:48:49.248883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:30.459 [2024-09-29 21:48:49.248891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:30.459 [2024-09-29 21:48:49.248897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.459 [2024-09-29 21:48:49.248904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:30.459 [2024-09-29 21:48:49.248911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:30.459 [2024-09-29 21:48:49.248917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.459 [2024-09-29 21:48:49.248924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:30.459 [2024-09-29 21:48:49.248931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:30.459 [2024-09-29 21:48:49.248938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.459 [2024-09-29 21:48:49.248945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:30.459 [2024-09-29 21:48:49.248952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:30.459 [2024-09-29 21:48:49.248958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.460 [2024-09-29 21:48:49.248966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:30.460 [2024-09-29 21:48:49.248973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:30.460 [2024-09-29 21:48:49.248980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.460 [2024-09-29 21:48:49.248987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:30.460 [2024-09-29 21:48:49.248993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:30.460 [2024-09-29 21:48:49.249000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.460 [2024-09-29 21:48:49.249007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:30.460 [2024-09-29 21:48:49.249014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:30.460 [2024-09-29 21:48:49.249020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.460 [2024-09-29 21:48:49.249027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:30.460 [2024-09-29 21:48:49.249034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:30.460 [2024-09-29 21:48:49.249040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.460 [2024-09-29 21:48:49.249047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:30.460 [2024-09-29 21:48:49.249054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:30.460 [2024-09-29 21:48:49.249062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.460 [2024-09-29 21:48:49.249069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.460 [2024-09-29 21:48:49.249076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:30.460 [2024-09-29 21:48:49.249083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:30.460 [2024-09-29 21:48:49.249089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:30.460 
[2024-09-29 21:48:49.249096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:30.460 [2024-09-29 21:48:49.249105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:30.460 [2024-09-29 21:48:49.249113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:30.460 [2024-09-29 21:48:49.249121] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:30.460 [2024-09-29 21:48:49.249134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.460 [2024-09-29 21:48:49.249144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:30.460 [2024-09-29 21:48:49.249151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:30.460 [2024-09-29 21:48:49.249158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:30.460 [2024-09-29 21:48:49.249164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:30.460 [2024-09-29 21:48:49.249172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:30.460 [2024-09-29 21:48:49.249179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:30.460 [2024-09-29 21:48:49.249186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:30.460 [2024-09-29 21:48:49.249193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:30.460 [2024-09-29 21:48:49.249202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:30.460 [2024-09-29 21:48:49.249209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:30.460 [2024-09-29 21:48:49.249216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:30.460 [2024-09-29 21:48:49.249224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:30.460 [2024-09-29 21:48:49.249230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:30.460 [2024-09-29 21:48:49.249237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:30.460 [2024-09-29 21:48:49.249245] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:30.460 [2024-09-29 21:48:49.249253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.460 [2024-09-29 21:48:49.249261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:30.460 [2024-09-29 21:48:49.249268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:30.460 [2024-09-29 21:48:49.249276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:30.460 [2024-09-29 21:48:49.249283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:30.460 [2024-09-29 21:48:49.249290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.249300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:30.460 [2024-09-29 21:48:49.249309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:17:30.460 [2024-09-29 21:48:49.249316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.291887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.291940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.460 [2024-09-29 21:48:49.291954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.518 ms 00:17:30.460 [2024-09-29 21:48:49.291963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.292122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.292135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:30.460 [2024-09-29 21:48:49.292145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:30.460 [2024-09-29 21:48:49.292153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.324870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.324912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.460 [2024-09-29 21:48:49.324923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.695 ms 00:17:30.460 [2024-09-29 21:48:49.324931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.325030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.325040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.460 [2024-09-29 21:48:49.325049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:30.460 [2024-09-29 21:48:49.325057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.325499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.325515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.460 [2024-09-29 21:48:49.325525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:17:30.460 [2024-09-29 21:48:49.325533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.325675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.325685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.460 [2024-09-29 21:48:49.325694] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:17:30.460 [2024-09-29 21:48:49.325701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.339503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.339536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.460 [2024-09-29 21:48:49.339546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.779 ms 00:17:30.460 [2024-09-29 21:48:49.339554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.352603] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:30.460 [2024-09-29 21:48:49.352640] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:30.460 [2024-09-29 21:48:49.352655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.352664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:30.460 [2024-09-29 21:48:49.352673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.988 ms 00:17:30.460 [2024-09-29 21:48:49.352681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.377179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.377216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:30.460 [2024-09-29 21:48:49.377228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.420 ms 00:17:30.460 [2024-09-29 21:48:49.377241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.388991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.389022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:30.460 [2024-09-29 21:48:49.389032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.687 ms 00:17:30.460 [2024-09-29 21:48:49.389040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.400365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.400404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:30.460 [2024-09-29 21:48:49.400415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.260 ms 00:17:30.460 [2024-09-29 21:48:49.400422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.460 [2024-09-29 21:48:49.401060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.460 [2024-09-29 21:48:49.401088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:30.460 [2024-09-29 21:48:49.401098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:17:30.460 [2024-09-29 21:48:49.401106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.719 [2024-09-29 21:48:49.460674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.719 [2024-09-29 21:48:49.460738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:30.719 [2024-09-29 21:48:49.460752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.542 ms 00:17:30.719 [2024-09-29 21:48:49.460761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.719 [2024-09-29 21:48:49.471364] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:30.719 [2024-09-29 21:48:49.488665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.719 [2024-09-29 21:48:49.488718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:30.719 [2024-09-29 21:48:49.488732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.784 ms 00:17:30.719 [2024-09-29 21:48:49.488740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.719 [2024-09-29 21:48:49.488851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.719 [2024-09-29 21:48:49.488862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:30.719 [2024-09-29 21:48:49.488872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:30.719 [2024-09-29 21:48:49.488880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.719 [2024-09-29 21:48:49.488941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.719 [2024-09-29 21:48:49.488950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:30.719 [2024-09-29 21:48:49.488961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:30.719 [2024-09-29 21:48:49.488968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.719 [2024-09-29 21:48:49.488991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.719 [2024-09-29 21:48:49.489000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:30.719 [2024-09-29 21:48:49.489007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:30.719 [2024-09-29 21:48:49.489015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.719 [2024-09-29 21:48:49.489050] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:30.719 [2024-09-29 21:48:49.489060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.719 [2024-09-29 21:48:49.489068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:30.719 [2024-09-29 21:48:49.489075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:30.719 [2024-09-29 21:48:49.489086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.719 [2024-09-29 21:48:49.513356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.719 [2024-09-29 21:48:49.513408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:30.719 [2024-09-29 21:48:49.513422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.249 ms 00:17:30.719 [2024-09-29 21:48:49.513430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.719 [2024-09-29 21:48:49.513531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.719 [2024-09-29 21:48:49.513542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:30.719 [2024-09-29 21:48:49.513555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:30.719 [2024-09-29 21:48:49.513562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
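The two superblock dumps above ("SB metadata layout - nvc" and "- base dev") give each region as raw FTL blocks in hex, while the layout dump reports the same extents in MiB; the two agree once multiplied by a 4 KiB FTL block. Both that block size (SPDK's FTL block size) and the pairing of region types with region names are assumptions here, inferred purely from the matching offsets, not read from the log. A minimal sketch of the conversion:

FTL_BLOCK_SIZE = 4096  # bytes; assumption: SPDK's 4 KiB FTL block

# (blk_offs, blk_sz) copied verbatim from the superblock dumps above; the
# name next to each type is inferred by matching offsets against the
# MiB-based layout dump, not taken from the log itself.
regions = {
    "sb       (nvc type 0x0)":  (0x0,    0x20),
    "l2p      (nvc type 0x2)":  (0x20,   0x5a00),
    "band_md  (nvc type 0x3)":  (0x5a20, 0x80),
    "data_btm (base type 0x9)": (0x40,   0x1900000),
}

def to_mib(blocks: int) -> float:
    return blocks * FTL_BLOCK_SIZE / (1 << 20)

for name, (offs, sz) in regions.items():
    print(f"{name}: offset {to_mib(offs):.2f} MiB, size {to_mib(sz):.2f} MiB")

# sb:       offset 0.00 MiB, size 0.12 MiB      -> "Region sb ... blocks: 0.12 MiB"
# l2p:      offset 0.12 MiB, size 90.00 MiB     -> "Region l2p ... blocks: 90.00 MiB"
# band_md:  offset 90.12 MiB, size 0.50 MiB     -> "Region band_md ... blocks: 0.50 MiB"
# data_btm: offset 0.25 MiB, size 102400.00 MiB -> "Region data_btm ... blocks: 102400.00 MiB"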
00:17:30.719 [2024-09-29 21:48:49.514755] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.719 [2024-09-29 21:48:49.518137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.932 ms, result 0 00:17:30.719 [2024-09-29 21:48:49.518980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:30.719 [2024-09-29 21:48:49.531766] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:36.764  Copying: 256/256 [MB] (average 42 MBps) [2024-09-29 21:48:55.601290] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:36.764 [2024-09-29 21:48:55.610746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.610792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:36.764 [2024-09-29 21:48:55.610807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:36.764 [2024-09-29 21:48:55.610815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.764 [2024-09-29 21:48:55.610837] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:36.764 [2024-09-29 21:48:55.613636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.613666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:36.764 [2024-09-29 21:48:55.613678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms 00:17:36.764 [2024-09-29 21:48:55.613686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.764 [2024-09-29 21:48:55.615658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.615689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:36.764 [2024-09-29 21:48:55.615704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:17:36.764 [2024-09-29 21:48:55.615719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.764 [2024-09-29 21:48:55.622725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.622755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:36.764 [2024-09-29 21:48:55.622765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.988 ms 00:17:36.764 [2024-09-29 21:48:55.622773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.764 [2024-09-29 21:48:55.629724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.629751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:36.764 [2024-09-29 21:48:55.629761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.910 ms 00:17:36.764 [2024-09-29 21:48:55.629774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.764 [2024-09-29 21:48:55.653263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.653527]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:36.764 [2024-09-29 21:48:55.653546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.444 ms 00:17:36.764 [2024-09-29 21:48:55.653555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.764 [2024-09-29 21:48:55.668153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.668186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:36.764 [2024-09-29 21:48:55.668197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.550 ms 00:17:36.764 [2024-09-29 21:48:55.668205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.764 [2024-09-29 21:48:55.668344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.668355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:36.764 [2024-09-29 21:48:55.668365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:36.764 [2024-09-29 21:48:55.668372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.764 [2024-09-29 21:48:55.691326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.691371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:36.764 [2024-09-29 21:48:55.691381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.921 ms 00:17:36.764 [2024-09-29 21:48:55.691399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.764 [2024-09-29 21:48:55.714034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.714066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:36.764 [2024-09-29 21:48:55.714076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.597 ms 00:17:36.764 [2024-09-29 21:48:55.714083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.764 [2024-09-29 21:48:55.736227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.764 [2024-09-29 21:48:55.736260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:36.764 [2024-09-29 21:48:55.736270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.109 ms 00:17:36.764 [2024-09-29 21:48:55.736278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.024 [2024-09-29 21:48:55.758963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.024 [2024-09-29 21:48:55.758999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:37.024 [2024-09-29 21:48:55.759009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.620 ms 00:17:37.024 [2024-09-29 21:48:55.759017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.024 [2024-09-29 21:48:55.759053] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:37.024 [2024-09-29 21:48:55.759069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759508] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:37.024 [2024-09-29 21:48:55.759516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759712] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:37.025 [2024-09-29 21:48:55.759906] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:37.025 [2024-09-29 21:48:55.759914] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1605127c-fec2-4388-9e4a-c6c135288dc4 00:17:37.025 [2024-09-29 21:48:55.759922] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:37.025 [2024-09-29 21:48:55.759929] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:37.025 [2024-09-29 21:48:55.759936] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:37.025 [2024-09-29 21:48:55.759944] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:37.025 [2024-09-29 21:48:55.759956] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:37.025 [2024-09-29 21:48:55.759964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:37.025 [2024-09-29 21:48:55.759971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:37.025 [2024-09-29 21:48:55.759978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:37.025 [2024-09-29 21:48:55.759985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:37.025 [2024-09-29 21:48:55.759992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.025 [2024-09-29 21:48:55.760000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:37.025 [2024-09-29 21:48:55.760010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:17:37.025 [2024-09-29 21:48:55.760018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.773058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.025 [2024-09-29 21:48:55.773090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:37.025 [2024-09-29 21:48:55.773106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.008 ms 00:17:37.025 [2024-09-29 21:48:55.773114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.773520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.025 [2024-09-29 21:48:55.773542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:37.025 [2024-09-29 21:48:55.773551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:17:37.025 [2024-09-29 21:48:55.773560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.805471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.025 [2024-09-29 21:48:55.805515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:37.025 [2024-09-29 21:48:55.805526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.025 [2024-09-29 21:48:55.805536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.805623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.025 [2024-09-29 21:48:55.805634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:37.025 [2024-09-29 21:48:55.805643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.025 [2024-09-29 21:48:55.805652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.805697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.025 [2024-09-29 21:48:55.805708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:37.025 [2024-09-29 21:48:55.805721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:17:37.025 [2024-09-29 21:48:55.805730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.805748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.025 [2024-09-29 21:48:55.805757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:37.025 [2024-09-29 21:48:55.805766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.025 [2024-09-29 21:48:55.805775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.886857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.025 [2024-09-29 21:48:55.887103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:37.025 [2024-09-29 21:48:55.887127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.025 [2024-09-29 21:48:55.887135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.953561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.025 [2024-09-29 21:48:55.953801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:37.025 [2024-09-29 21:48:55.953821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.025 [2024-09-29 21:48:55.953831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.953902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.025 [2024-09-29 21:48:55.953913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:37.025 [2024-09-29 21:48:55.953923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.025 [2024-09-29 21:48:55.953936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.953966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.025 [2024-09-29 21:48:55.953975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:37.025 [2024-09-29 21:48:55.953983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.025 [2024-09-29 21:48:55.953990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.025 [2024-09-29 21:48:55.954089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.025 [2024-09-29 21:48:55.954100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:37.026 [2024-09-29 21:48:55.954108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.026 [2024-09-29 21:48:55.954116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.026 [2024-09-29 21:48:55.954158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.026 [2024-09-29 21:48:55.954167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:37.026 [2024-09-29 21:48:55.954176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.026 [2024-09-29 21:48:55.954183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.026 [2024-09-29 21:48:55.954226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.026 [2024-09-29 21:48:55.954235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:37.026 [2024-09-29 
21:48:55.954243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.026 [2024-09-29 21:48:55.954251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.026 [2024-09-29 21:48:55.954299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.026 [2024-09-29 21:48:55.954309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:37.026 [2024-09-29 21:48:55.954318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.026 [2024-09-29 21:48:55.954325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.026 [2024-09-29 21:48:55.954498] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.739 ms, result 0 00:17:37.962 00:17:37.962 00:17:37.962 21:48:56 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74320 00:17:37.962 21:48:56 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74320 00:17:37.962 21:48:56 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:37.962 21:48:56 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74320 ']' 00:17:37.962 21:48:56 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.962 21:48:56 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.962 21:48:56 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.962 21:48:56 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.962 21:48:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:38.221 [2024-09-29 21:48:56.987400] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
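Each management step in the trace above is logged by mngt/ftl_mngt.c as a fixed quadruple (Action or Rollback, then name, duration, status), and the finish_msg lines carry the per-process totals ('FTL startup' = 295.932 ms, 'FTL shutdown' = 343.739 ms). A throwaway sketch, not an SPDK tool, that tallies per-step durations from a saved copy of this console output; the regexes lean on the exact line formats shown above, and the log path argument is whatever file the output was saved to:

import re
import sys
from collections import defaultdict

# Tally per-step FTL management durations from a saved console log.
# Each trace_step quadruple logs "name: <step>" followed by "duration: <ms> ms";
# the step name ends where the next elapsed-time stamp (HH:MM:SS.mmm) begins.
text = open(sys.argv[1], errors="replace").read()

pair = re.compile(
    r"name: (?P<step>.+?) \d{2}:\d{2}:\d{2}\.\d{3}.*?duration: (?P<ms>[0-9.]+) ms",
    re.S,
)

steps = defaultdict(float)
for m in pair.finditer(text):
    steps[m.group("step")] += float(m.group("ms"))

for step, ms in sorted(steps.items(), key=lambda kv: -kv[1]):
    print(f"{ms:10.3f} ms  {step}")

# Cross-check against the totals reported by finish_msg.
for m in re.finditer(r"finish_msg: .*?name '(.+?)', duration = ([0-9.]+) ms", text):
    print(f"total {m.group(1)}: {m.group(2)} ms")

Run as e.g. python3 tally_ftl_steps.py console.log (both file names hypothetical); summing the Action durations of one startup should land near the finish_msg total, with the remainder being time spent between steps.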
00:17:38.221 [2024-09-29 21:48:56.987877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74320 ] 00:17:38.221 [2024-09-29 21:48:57.136879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.479 [2024-09-29 21:48:57.308313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.046 21:48:57 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:39.046 21:48:57 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:39.046 21:48:57 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:39.306 [2024-09-29 21:48:58.047642] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:39.306 [2024-09-29 21:48:58.047702] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:39.306 [2024-09-29 21:48:58.217611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.306 [2024-09-29 21:48:58.217661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:39.306 [2024-09-29 21:48:58.217674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:39.306 [2024-09-29 21:48:58.217683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.306 [2024-09-29 21:48:58.219867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.306 [2024-09-29 21:48:58.219897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.306 [2024-09-29 21:48:58.219909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:17:39.306 [2024-09-29 21:48:58.219915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.306 [2024-09-29 21:48:58.219975] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:39.306 [2024-09-29 21:48:58.220503] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:39.306 [2024-09-29 21:48:58.220523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.306 [2024-09-29 21:48:58.220530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.306 [2024-09-29 21:48:58.220538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:17:39.306 [2024-09-29 21:48:58.220543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.306 [2024-09-29 21:48:58.221972] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:39.307 [2024-09-29 21:48:58.232370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.307 [2024-09-29 21:48:58.232410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:39.307 [2024-09-29 21:48:58.232420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.402 ms 00:17:39.307 [2024-09-29 21:48:58.232429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.307 [2024-09-29 21:48:58.232514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.307 [2024-09-29 21:48:58.232527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:39.307 [2024-09-29 21:48:58.232534] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:39.307 [2024-09-29 21:48:58.232542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.307 [2024-09-29 21:48:58.238870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.307 [2024-09-29 21:48:58.238904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.307 [2024-09-29 21:48:58.238912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.288 ms 00:17:39.307 [2024-09-29 21:48:58.238920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.307 [2024-09-29 21:48:58.239008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.307 [2024-09-29 21:48:58.239019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.307 [2024-09-29 21:48:58.239026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:39.307 [2024-09-29 21:48:58.239033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.307 [2024-09-29 21:48:58.239053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.307 [2024-09-29 21:48:58.239063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:39.307 [2024-09-29 21:48:58.239069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:39.307 [2024-09-29 21:48:58.239076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.307 [2024-09-29 21:48:58.239096] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:39.307 [2024-09-29 21:48:58.242171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.307 [2024-09-29 21:48:58.242195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.307 [2024-09-29 21:48:58.242209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:17:39.307 [2024-09-29 21:48:58.242217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.307 [2024-09-29 21:48:58.242251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.307 [2024-09-29 21:48:58.242258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:39.307 [2024-09-29 21:48:58.242267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:39.307 [2024-09-29 21:48:58.242274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.307 [2024-09-29 21:48:58.242293] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:39.307 [2024-09-29 21:48:58.242310] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:39.307 [2024-09-29 21:48:58.242344] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:39.307 [2024-09-29 21:48:58.242361] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:39.307 [2024-09-29 21:48:58.242459] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:39.307 [2024-09-29 21:48:58.242470] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:39.307 [2024-09-29 21:48:58.242480] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:39.307 [2024-09-29 21:48:58.242489] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:39.307 [2024-09-29 21:48:58.242497] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:39.307 [2024-09-29 21:48:58.242504] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:39.307 [2024-09-29 21:48:58.242511] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:39.307 [2024-09-29 21:48:58.242517] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:39.307 [2024-09-29 21:48:58.242529] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:39.307 [2024-09-29 21:48:58.242537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.307 [2024-09-29 21:48:58.242545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:39.307 [2024-09-29 21:48:58.242551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:39.307 [2024-09-29 21:48:58.242558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.307 [2024-09-29 21:48:58.242635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.307 [2024-09-29 21:48:58.242645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:39.307 [2024-09-29 21:48:58.242651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:39.307 [2024-09-29 21:48:58.242659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.307 [2024-09-29 21:48:58.242738] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:39.307 [2024-09-29 21:48:58.242749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:39.307 [2024-09-29 21:48:58.242756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:39.307 [2024-09-29 21:48:58.242764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.307 [2024-09-29 21:48:58.242771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:39.307 [2024-09-29 21:48:58.242779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:39.307 [2024-09-29 21:48:58.242785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:39.307 [2024-09-29 21:48:58.242794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:39.307 [2024-09-29 21:48:58.242800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:39.307 [2024-09-29 21:48:58.242808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:39.307 [2024-09-29 21:48:58.242813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:39.307 [2024-09-29 21:48:58.242820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:39.307 [2024-09-29 21:48:58.242825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:39.307 [2024-09-29 21:48:58.242831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:39.307 [2024-09-29 21:48:58.242836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:39.307 [2024-09-29 21:48:58.242843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.307 
[2024-09-29 21:48:58.242849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:39.307 [2024-09-29 21:48:58.242857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:39.307 [2024-09-29 21:48:58.242869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.307 [2024-09-29 21:48:58.242876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:39.307 [2024-09-29 21:48:58.242882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:39.307 [2024-09-29 21:48:58.242889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.307 [2024-09-29 21:48:58.242895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:39.307 [2024-09-29 21:48:58.242904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:39.307 [2024-09-29 21:48:58.242909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.307 [2024-09-29 21:48:58.242917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:39.307 [2024-09-29 21:48:58.242922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:39.307 [2024-09-29 21:48:58.242928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.307 [2024-09-29 21:48:58.242934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:39.307 [2024-09-29 21:48:58.242942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:39.307 [2024-09-29 21:48:58.242948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:39.307 [2024-09-29 21:48:58.242955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:39.307 [2024-09-29 21:48:58.242960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:39.307 [2024-09-29 21:48:58.242966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:39.307 [2024-09-29 21:48:58.242971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:39.307 [2024-09-29 21:48:58.242979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:39.307 [2024-09-29 21:48:58.242984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:39.307 [2024-09-29 21:48:58.242990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:39.307 [2024-09-29 21:48:58.242996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:39.307 [2024-09-29 21:48:58.243013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.307 [2024-09-29 21:48:58.243019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:39.307 [2024-09-29 21:48:58.243026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:39.307 [2024-09-29 21:48:58.243031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.307 [2024-09-29 21:48:58.243038] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:39.307 [2024-09-29 21:48:58.243044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:39.307 [2024-09-29 21:48:58.243051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:39.307 [2024-09-29 21:48:58.243057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:39.307 [2024-09-29 21:48:58.243064] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:39.307 [2024-09-29 21:48:58.243069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:39.307 [2024-09-29 21:48:58.243076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:39.307 [2024-09-29 21:48:58.243083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:39.307 [2024-09-29 21:48:58.243091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:39.307 [2024-09-29 21:48:58.243096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:39.307 [2024-09-29 21:48:58.243104] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:39.307 [2024-09-29 21:48:58.243111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:39.308 [2024-09-29 21:48:58.243121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:39.308 [2024-09-29 21:48:58.243128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:39.308 [2024-09-29 21:48:58.243136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:39.308 [2024-09-29 21:48:58.243141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:39.308 [2024-09-29 21:48:58.243148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:39.308 [2024-09-29 21:48:58.243154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:39.308 [2024-09-29 21:48:58.243160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:39.308 [2024-09-29 21:48:58.243166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:39.308 [2024-09-29 21:48:58.243175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:39.308 [2024-09-29 21:48:58.243181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:39.308 [2024-09-29 21:48:58.243188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:39.308 [2024-09-29 21:48:58.243194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:39.308 [2024-09-29 21:48:58.243200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:39.308 [2024-09-29 21:48:58.243206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:39.308 [2024-09-29 21:48:58.243213] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:39.308 [2024-09-29 
21:48:58.243220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:39.308 [2024-09-29 21:48:58.243231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:39.308 [2024-09-29 21:48:58.243236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:39.308 [2024-09-29 21:48:58.243243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:39.308 [2024-09-29 21:48:58.243249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:39.308 [2024-09-29 21:48:58.243256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.308 [2024-09-29 21:48:58.243262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:39.308 [2024-09-29 21:48:58.243269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:17:39.308 [2024-09-29 21:48:58.243276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.308 [2024-09-29 21:48:58.267562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.308 [2024-09-29 21:48:58.267590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.308 [2024-09-29 21:48:58.267601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.222 ms 00:17:39.308 [2024-09-29 21:48:58.267608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.308 [2024-09-29 21:48:58.267700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.308 [2024-09-29 21:48:58.267707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:39.308 [2024-09-29 21:48:58.267715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:39.308 [2024-09-29 21:48:58.267721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.567 [2024-09-29 21:48:58.302531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.567 [2024-09-29 21:48:58.302567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.567 [2024-09-29 21:48:58.302580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.787 ms 00:17:39.567 [2024-09-29 21:48:58.302586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.567 [2024-09-29 21:48:58.302651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.567 [2024-09-29 21:48:58.302659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.567 [2024-09-29 21:48:58.302668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:39.567 [2024-09-29 21:48:58.302676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.567 [2024-09-29 21:48:58.303067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.567 [2024-09-29 21:48:58.303080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.567 [2024-09-29 21:48:58.303088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:17:39.567 [2024-09-29 21:48:58.303094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:39.567 [2024-09-29 21:48:58.303206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.567 [2024-09-29 21:48:58.303213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.567 [2024-09-29 21:48:58.303223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:39.567 [2024-09-29 21:48:58.303229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.567 [2024-09-29 21:48:58.319275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.567 [2024-09-29 21:48:58.319304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.567 [2024-09-29 21:48:58.319315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.025 ms 00:17:39.567 [2024-09-29 21:48:58.319323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.567 [2024-09-29 21:48:58.329557] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:39.567 [2024-09-29 21:48:58.329586] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:39.567 [2024-09-29 21:48:58.329598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.567 [2024-09-29 21:48:58.329605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:39.567 [2024-09-29 21:48:58.329614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.160 ms 00:17:39.567 [2024-09-29 21:48:58.329621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.567 [2024-09-29 21:48:58.348209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.567 [2024-09-29 21:48:58.348420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:39.567 [2024-09-29 21:48:58.348438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.528 ms 00:17:39.567 [2024-09-29 21:48:58.348450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.567 [2024-09-29 21:48:58.357775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.567 [2024-09-29 21:48:58.357808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:39.567 [2024-09-29 21:48:58.357822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.026 ms 00:17:39.567 [2024-09-29 21:48:58.357828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.567 [2024-09-29 21:48:58.366376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.567 [2024-09-29 21:48:58.366407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:39.567 [2024-09-29 21:48:58.366417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.501 ms 00:17:39.567 [2024-09-29 21:48:58.366423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.567 [2024-09-29 21:48:58.366905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.568 [2024-09-29 21:48:58.366921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:39.568 [2024-09-29 21:48:58.366930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:17:39.568 [2024-09-29 21:48:58.366938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.568 [2024-09-29 
21:48:58.415229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.568 [2024-09-29 21:48:58.415274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:39.568 [2024-09-29 21:48:58.415288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.270 ms 00:17:39.568 [2024-09-29 21:48:58.415297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.568 [2024-09-29 21:48:58.423401] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:39.568 [2024-09-29 21:48:58.438210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.568 [2024-09-29 21:48:58.438250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:39.568 [2024-09-29 21:48:58.438261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.844 ms 00:17:39.568 [2024-09-29 21:48:58.438269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.568 [2024-09-29 21:48:58.438363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.568 [2024-09-29 21:48:58.438373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:39.568 [2024-09-29 21:48:58.438380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:39.568 [2024-09-29 21:48:58.438408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.568 [2024-09-29 21:48:58.438465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.568 [2024-09-29 21:48:58.438476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:39.568 [2024-09-29 21:48:58.438483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:39.568 [2024-09-29 21:48:58.438493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.568 [2024-09-29 21:48:58.438514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.568 [2024-09-29 21:48:58.438525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:39.568 [2024-09-29 21:48:58.438531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:39.568 [2024-09-29 21:48:58.438539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.568 [2024-09-29 21:48:58.438568] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:39.568 [2024-09-29 21:48:58.438579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.568 [2024-09-29 21:48:58.438585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:39.568 [2024-09-29 21:48:58.438593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:39.568 [2024-09-29 21:48:58.438599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.568 [2024-09-29 21:48:58.457400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.568 [2024-09-29 21:48:58.457564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:39.568 [2024-09-29 21:48:58.457583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.781 ms 00:17:39.568 [2024-09-29 21:48:58.457592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.568 [2024-09-29 21:48:58.457668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.568 [2024-09-29 21:48:58.457677] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:39.568 [2024-09-29 21:48:58.457685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:17:39.568 [2024-09-29 21:48:58.457691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.568 [2024-09-29 21:48:58.458513] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:39.568 [2024-09-29 21:48:58.460780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 240.588 ms, result 0
00:17:39.568 [2024-09-29 21:48:58.461698] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:39.568 Some configs were skipped because the RPC state that can call them passed over.
00:17:39.568 21:48:58 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:17:39.826 [2024-09-29 21:48:58.690083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:39.826 [2024-09-29 21:48:58.690272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:39.826 [2024-09-29 21:48:58.690320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms
00:17:39.826 [2024-09-29 21:48:58.690342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.826 [2024-09-29 21:48:58.690384] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.929 ms, result 0
00:17:39.826 true
00:17:39.826 21:48:58 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:17:40.084 [2024-09-29 21:48:58.894995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:40.084 [2024-09-29 21:48:58.895334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:40.084 [2024-09-29 21:48:58.895428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.053 ms
00:17:40.084 [2024-09-29 21:48:58.895456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:40.085 [2024-09-29 21:48:58.895577] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.616 ms, result 0
00:17:40.085 true
00:17:40.085 21:48:58 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74320
00:17:40.085 21:48:58 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74320 ']'
00:17:40.085 21:48:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74320
00:17:40.085 21:48:58 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:17:40.085 21:48:58 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:40.085 21:48:58 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74320
00:17:40.085 killing process with pid 74320 21:48:58 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:40.085 21:48:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:40.085 21:48:58 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74320'
00:17:40.085 21:48:58 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74320
00:17:40.085 21:48:58 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74320
00:17:41.053 [2024-09-29 21:48:59.685752]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.685815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:41.053 [2024-09-29 21:48:59.685827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:41.053 [2024-09-29 21:48:59.685835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.685856] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:41.053 [2024-09-29 21:48:59.688098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.688128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:41.053 [2024-09-29 21:48:59.688139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:17:41.053 [2024-09-29 21:48:59.688145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.688395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.688406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:41.053 [2024-09-29 21:48:59.688415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:41.053 [2024-09-29 21:48:59.688421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.691883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.691909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:41.053 [2024-09-29 21:48:59.691919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.443 ms 00:17:41.053 [2024-09-29 21:48:59.691925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.697191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.697371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:41.053 [2024-09-29 21:48:59.697401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.239 ms 00:17:41.053 [2024-09-29 21:48:59.697408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.704764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.704789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:41.053 [2024-09-29 21:48:59.704802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.304 ms 00:17:41.053 [2024-09-29 21:48:59.704808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.711824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.711946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:41.053 [2024-09-29 21:48:59.711963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.981 ms 00:17:41.053 [2024-09-29 21:48:59.711976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.712089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.712099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:41.053 [2024-09-29 21:48:59.712109] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:41.053 [2024-09-29 21:48:59.712118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.719904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.719929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:41.053 [2024-09-29 21:48:59.719938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.768 ms 00:17:41.053 [2024-09-29 21:48:59.719944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.727519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.727543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:41.053 [2024-09-29 21:48:59.727556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.541 ms 00:17:41.053 [2024-09-29 21:48:59.727562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.734850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.734955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:41.053 [2024-09-29 21:48:59.734970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.255 ms 00:17:41.053 [2024-09-29 21:48:59.734975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.742181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.053 [2024-09-29 21:48:59.742284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:41.053 [2024-09-29 21:48:59.742298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.153 ms 00:17:41.053 [2024-09-29 21:48:59.742304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.053 [2024-09-29 21:48:59.742341] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:41.053 [2024-09-29 21:48:59.742354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:41.053 [2024-09-29 21:48:59.742364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742437] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 
[2024-09-29 21:48:59.742614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:41.054 [2024-09-29 21:48:59.742780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:41.054 [2024-09-29 21:48:59.742922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.742993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.743001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.743007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.743015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.743022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.743029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.743035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.743042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:41.055 [2024-09-29 21:48:59.743054] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:41.055 [2024-09-29 21:48:59.743063] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1605127c-fec2-4388-9e4a-c6c135288dc4 00:17:41.055 [2024-09-29 21:48:59.743069] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:41.055 [2024-09-29 21:48:59.743078] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:41.055 [2024-09-29 21:48:59.743083] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:41.055 [2024-09-29 21:48:59.743091] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:41.055 [2024-09-29 21:48:59.743102] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:41.055 [2024-09-29 21:48:59.743111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:41.055 [2024-09-29 21:48:59.743117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:41.055 [2024-09-29 21:48:59.743123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:41.055 [2024-09-29 21:48:59.743128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:41.055 [2024-09-29 21:48:59.743135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:41.055 [2024-09-29 21:48:59.743141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:41.055 [2024-09-29 21:48:59.743150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:17:41.055 [2024-09-29 21:48:59.743155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.055 [2024-09-29 21:48:59.753579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.055 [2024-09-29 21:48:59.753687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:41.055 [2024-09-29 21:48:59.753772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.405 ms 00:17:41.055 [2024-09-29 21:48:59.753792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.055 [2024-09-29 21:48:59.754117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.055 [2024-09-29 21:48:59.754274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:41.055 [2024-09-29 21:48:59.754324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:17:41.055 [2024-09-29 21:48:59.754342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.055 [2024-09-29 21:48:59.786926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.055 [2024-09-29 21:48:59.787034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.055 [2024-09-29 21:48:59.787083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.055 [2024-09-29 21:48:59.787101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.055 [2024-09-29 21:48:59.787200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.055 [2024-09-29 21:48:59.787220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.055 [2024-09-29 21:48:59.787237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.055 [2024-09-29 21:48:59.787253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.055 [2024-09-29 21:48:59.787299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.055 [2024-09-29 21:48:59.787318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.055 [2024-09-29 21:48:59.787396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.055 [2024-09-29 21:48:59.787418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.055 [2024-09-29 21:48:59.787446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.055 [2024-09-29 21:48:59.787462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.055 [2024-09-29 21:48:59.787479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.055 [2024-09-29 21:48:59.787493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.055 [2024-09-29 21:48:59.849758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.055 [2024-09-29 21:48:59.849942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.055 [2024-09-29 21:48:59.849989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.055 [2024-09-29 21:48:59.850010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.055 [2024-09-29 
21:48:59.901703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:41.055 [2024-09-29 21:48:59.901880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:41.055 [2024-09-29 21:48:59.901927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:41.055 [2024-09-29 21:48:59.901946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:41.055 [2024-09-29 21:48:59.902054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:41.055 [2024-09-29 21:48:59.902179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:41.055 [2024-09-29 21:48:59.902216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:41.055 [2024-09-29 21:48:59.902231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:41.055 [2024-09-29 21:48:59.902313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:41.055 [2024-09-29 21:48:59.902334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:41.055 [2024-09-29 21:48:59.902351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:41.055 [2024-09-29 21:48:59.902368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:41.055 [2024-09-29 21:48:59.902504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:41.055 [2024-09-29 21:48:59.902818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:41.055 [2024-09-29 21:48:59.902888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:41.055 [2024-09-29 21:48:59.902912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:41.055 [2024-09-29 21:48:59.903001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:41.055 [2024-09-29 21:48:59.903027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:41.055 [2024-09-29 21:48:59.903045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:41.055 [2024-09-29 21:48:59.903061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:41.055 [2024-09-29 21:48:59.903148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:41.055 [2024-09-29 21:48:59.903168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:41.055 [2024-09-29 21:48:59.903187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:41.055 [2024-09-29 21:48:59.903202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:41.055 [2024-09-29 21:48:59.903260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:41.055 [2024-09-29 21:48:59.903337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:41.055 [2024-09-29 21:48:59.903354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:41.055 [2024-09-29 21:48:59.903370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:41.055 [2024-09-29 21:48:59.903553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 217.778 ms, result 0
00:17:41.622 21:49:00 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:17:41.622 21:49:00 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:41.881 [2024-09-29 21:49:00.606377] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:17:41.881 [2024-09-29 21:49:00.606725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74367 ]
00:17:41.881 [2024-09-29 21:49:00.749654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:42.137 [2024-09-29 21:49:00.928227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:42.396 [2024-09-29 21:49:01.156352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:42.396 [2024-09-29 21:49:01.156426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:17:42.396 [2024-09-29 21:49:01.310396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.396 [2024-09-29 21:49:01.310440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:17:42.396 [2024-09-29 21:49:01.310455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:42.396 [2024-09-29 21:49:01.310463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.396 [2024-09-29 21:49:01.312645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.396 [2024-09-29 21:49:01.312678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:42.396 [2024-09-29 21:49:01.312686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.167 ms
00:17:42.396 [2024-09-29 21:49:01.312694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.396 [2024-09-29 21:49:01.312759] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:42.396 [2024-09-29 21:49:01.313274] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:42.396 [2024-09-29 21:49:01.313291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.396 [2024-09-29 21:49:01.313299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:42.396 [2024-09-29 21:49:01.313306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms
00:17:42.396 [2024-09-29 21:49:01.313312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.396 [2024-09-29 21:49:01.315047] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:17:42.396 [2024-09-29 21:49:01.325461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.396 [2024-09-29 21:49:01.325492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:17:42.396 [2024-09-29 21:49:01.325504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.416 ms
00:17:42.396 [2024-09-29 21:49:01.325510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:42.396 [2024-09-29 21:49:01.325583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:42.396 [2024-09-29 21:49:01.325593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:17:42.396 [2024-09-29 21:49:01.325604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.019 ms 00:17:42.396 [2024-09-29 21:49:01.325610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.396 [2024-09-29 21:49:01.331932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.396 [2024-09-29 21:49:01.331959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.396 [2024-09-29 21:49:01.331967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.287 ms 00:17:42.396 [2024-09-29 21:49:01.331973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.396 [2024-09-29 21:49:01.332057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.396 [2024-09-29 21:49:01.332067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.396 [2024-09-29 21:49:01.332074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:42.396 [2024-09-29 21:49:01.332081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.396 [2024-09-29 21:49:01.332104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.396 [2024-09-29 21:49:01.332111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:42.396 [2024-09-29 21:49:01.332118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:42.396 [2024-09-29 21:49:01.332124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.396 [2024-09-29 21:49:01.332143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:42.396 [2024-09-29 21:49:01.335177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.396 [2024-09-29 21:49:01.335200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.396 [2024-09-29 21:49:01.335209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:17:42.396 [2024-09-29 21:49:01.335215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.396 [2024-09-29 21:49:01.335245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.396 [2024-09-29 21:49:01.335255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:42.396 [2024-09-29 21:49:01.335262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:42.396 [2024-09-29 21:49:01.335268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.396 [2024-09-29 21:49:01.335283] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:42.396 [2024-09-29 21:49:01.335299] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:42.396 [2024-09-29 21:49:01.335328] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:42.396 [2024-09-29 21:49:01.335342] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:42.396 [2024-09-29 21:49:01.335445] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:42.396 [2024-09-29 21:49:01.335456] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:42.396 [2024-09-29 21:49:01.335466] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:42.396 [2024-09-29 21:49:01.335474] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:42.396 [2024-09-29 21:49:01.335481] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:42.396 [2024-09-29 21:49:01.335488] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:42.397 [2024-09-29 21:49:01.335494] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:42.397 [2024-09-29 21:49:01.335501] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:42.397 [2024-09-29 21:49:01.335508] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:42.397 [2024-09-29 21:49:01.335514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.397 [2024-09-29 21:49:01.335523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:42.397 [2024-09-29 21:49:01.335529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:17:42.397 [2024-09-29 21:49:01.335535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.397 [2024-09-29 21:49:01.335612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.397 [2024-09-29 21:49:01.335620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:42.397 [2024-09-29 21:49:01.335627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:42.397 [2024-09-29 21:49:01.335633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.397 [2024-09-29 21:49:01.335713] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:42.397 [2024-09-29 21:49:01.335722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:42.397 [2024-09-29 21:49:01.335732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.397 [2024-09-29 21:49:01.335739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:42.397 [2024-09-29 21:49:01.335751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:42.397 [2024-09-29 21:49:01.335762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:42.397 [2024-09-29 21:49:01.335768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.397 [2024-09-29 21:49:01.335779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:42.397 [2024-09-29 21:49:01.335790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:42.397 [2024-09-29 21:49:01.335795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.397 [2024-09-29 21:49:01.335800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:42.397 [2024-09-29 21:49:01.335807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:42.397 [2024-09-29 21:49:01.335812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335818] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:42.397 [2024-09-29 21:49:01.335824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:42.397 [2024-09-29 21:49:01.335829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:42.397 [2024-09-29 21:49:01.335839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.397 [2024-09-29 21:49:01.335850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:42.397 [2024-09-29 21:49:01.335855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.397 [2024-09-29 21:49:01.335866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:42.397 [2024-09-29 21:49:01.335872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.397 [2024-09-29 21:49:01.335883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:42.397 [2024-09-29 21:49:01.335890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.397 [2024-09-29 21:49:01.335901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:42.397 [2024-09-29 21:49:01.335906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.397 [2024-09-29 21:49:01.335918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:42.397 [2024-09-29 21:49:01.335923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:42.397 [2024-09-29 21:49:01.335930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.397 [2024-09-29 21:49:01.335935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:42.397 [2024-09-29 21:49:01.335941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:42.397 [2024-09-29 21:49:01.335946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:42.397 [2024-09-29 21:49:01.335956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:42.397 [2024-09-29 21:49:01.335962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335967] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:42.397 [2024-09-29 21:49:01.335974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:42.397 [2024-09-29 21:49:01.335980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.397 [2024-09-29 21:49:01.335986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.397 [2024-09-29 21:49:01.335992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:42.397 
[2024-09-29 21:49:01.335998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:42.397 [2024-09-29 21:49:01.336003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:42.397 [2024-09-29 21:49:01.336009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:42.397 [2024-09-29 21:49:01.336014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:42.397 [2024-09-29 21:49:01.336020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:42.397 [2024-09-29 21:49:01.336026] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:42.397 [2024-09-29 21:49:01.336036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.397 [2024-09-29 21:49:01.336043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:42.397 [2024-09-29 21:49:01.336049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:42.397 [2024-09-29 21:49:01.336054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:42.397 [2024-09-29 21:49:01.336060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:42.397 [2024-09-29 21:49:01.336065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:42.397 [2024-09-29 21:49:01.336070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:42.397 [2024-09-29 21:49:01.336076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:42.397 [2024-09-29 21:49:01.336082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:42.397 [2024-09-29 21:49:01.336088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:42.397 [2024-09-29 21:49:01.336093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:42.397 [2024-09-29 21:49:01.336099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:42.397 [2024-09-29 21:49:01.336104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:42.397 [2024-09-29 21:49:01.336110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:42.397 [2024-09-29 21:49:01.336116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:42.397 [2024-09-29 21:49:01.336121] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:42.397 [2024-09-29 21:49:01.336127] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.398 [2024-09-29 21:49:01.336134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:42.398 [2024-09-29 21:49:01.336140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:42.398 [2024-09-29 21:49:01.336145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:42.398 [2024-09-29 21:49:01.336151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:42.398 [2024-09-29 21:49:01.336156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.398 [2024-09-29 21:49:01.336164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:42.398 [2024-09-29 21:49:01.336170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:17:42.398 [2024-09-29 21:49:01.336176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.398 [2024-09-29 21:49:01.374114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.398 [2024-09-29 21:49:01.374172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.398 [2024-09-29 21:49:01.374187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.886 ms 00:17:42.398 [2024-09-29 21:49:01.374199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.398 [2024-09-29 21:49:01.374365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.398 [2024-09-29 21:49:01.374381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:42.398 [2024-09-29 21:49:01.374411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:42.398 [2024-09-29 21:49:01.374422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.656 [2024-09-29 21:49:01.401174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.656 [2024-09-29 21:49:01.401204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.656 [2024-09-29 21:49:01.401213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.709 ms 00:17:42.656 [2024-09-29 21:49:01.401220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.656 [2024-09-29 21:49:01.401273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.656 [2024-09-29 21:49:01.401282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.656 [2024-09-29 21:49:01.401289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:42.656 [2024-09-29 21:49:01.401295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.401712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.401726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.657 [2024-09-29 21:49:01.401734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:17:42.657 [2024-09-29 21:49:01.401741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 
21:49:01.401869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.401879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.657 [2024-09-29 21:49:01.401886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:42.657 [2024-09-29 21:49:01.401892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.413304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.413513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.657 [2024-09-29 21:49:01.413527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.395 ms 00:17:42.657 [2024-09-29 21:49:01.413533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.423754] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:42.657 [2024-09-29 21:49:01.423785] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:42.657 [2024-09-29 21:49:01.423796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.423803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:42.657 [2024-09-29 21:49:01.423811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.174 ms 00:17:42.657 [2024-09-29 21:49:01.423818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.442514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.442543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:42.657 [2024-09-29 21:49:01.442556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.636 ms 00:17:42.657 [2024-09-29 21:49:01.442563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.451479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.451506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:42.657 [2024-09-29 21:49:01.451514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.860 ms 00:17:42.657 [2024-09-29 21:49:01.451521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.460488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.460622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:42.657 [2024-09-29 21:49:01.460636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.921 ms 00:17:42.657 [2024-09-29 21:49:01.460642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.461125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.461143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:42.657 [2024-09-29 21:49:01.461151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:17:42.657 [2024-09-29 21:49:01.461157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.510117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.510174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:42.657 [2024-09-29 21:49:01.510185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.939 ms 00:17:42.657 [2024-09-29 21:49:01.510192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.518695] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:42.657 [2024-09-29 21:49:01.533831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.533869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:42.657 [2024-09-29 21:49:01.533881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.544 ms 00:17:42.657 [2024-09-29 21:49:01.533888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.534062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.534072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:42.657 [2024-09-29 21:49:01.534080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:17:42.657 [2024-09-29 21:49:01.534087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.534153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.534164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:42.657 [2024-09-29 21:49:01.534171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:42.657 [2024-09-29 21:49:01.534178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.534196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.534202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:42.657 [2024-09-29 21:49:01.534209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:42.657 [2024-09-29 21:49:01.534216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.534246] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:42.657 [2024-09-29 21:49:01.534254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.534262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:42.657 [2024-09-29 21:49:01.534269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:42.657 [2024-09-29 21:49:01.534276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.553539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.553570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:42.657 [2024-09-29 21:49:01.553580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.246 ms 00:17:42.657 [2024-09-29 21:49:01.553587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.553669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.657 [2024-09-29 21:49:01.553679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:42.657 [2024-09-29 21:49:01.553687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:42.657 [2024-09-29 21:49:01.553694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.657 [2024-09-29 21:49:01.554486] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:42.657 [2024-09-29 21:49:01.556899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.813 ms, result 0 00:17:42.657 [2024-09-29 21:49:01.557575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:42.657 [2024-09-29 21:49:01.572377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:48.790  Copying: 256/256 [MB] (average 43 MBps)[2024-09-29 21:49:07.512355] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:48.790 [2024-09-29 21:49:07.522114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.522311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:48.790 [2024-09-29 21:49:07.522333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:48.790 [2024-09-29 21:49:07.522341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.790 [2024-09-29 21:49:07.522368] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:48.790 [2024-09-29 21:49:07.525135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.525164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:48.790 [2024-09-29 21:49:07.525175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms 00:17:48.790 [2024-09-29 21:49:07.525183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.790 [2024-09-29 21:49:07.525558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.525598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:48.790 [2024-09-29 21:49:07.525669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:17:48.790 [2024-09-29 21:49:07.525693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.790 [2024-09-29 21:49:07.529395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.529477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:48.790 [2024-09-29 21:49:07.529489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.674 ms 00:17:48.790 [2024-09-29 21:49:07.529497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.790 [2024-09-29 21:49:07.536445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.536549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:48.790 [2024-09-29 21:49:07.536569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.929 ms 00:17:48.790 [2024-09-29 21:49:07.536577]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.790 [2024-09-29 21:49:07.559686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.559803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:48.790 [2024-09-29 21:49:07.559819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.056 ms 00:17:48.790 [2024-09-29 21:49:07.559827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.790 [2024-09-29 21:49:07.573842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.573874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:48.790 [2024-09-29 21:49:07.573885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.984 ms 00:17:48.790 [2024-09-29 21:49:07.573893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.790 [2024-09-29 21:49:07.574026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.574038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:48.790 [2024-09-29 21:49:07.574047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:48.790 [2024-09-29 21:49:07.574054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.790 [2024-09-29 21:49:07.597223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.597263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:48.790 [2024-09-29 21:49:07.597273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.148 ms 00:17:48.790 [2024-09-29 21:49:07.597280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.790 [2024-09-29 21:49:07.619799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.619829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:48.790 [2024-09-29 21:49:07.619839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.486 ms 00:17:48.790 [2024-09-29 21:49:07.619846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.790 [2024-09-29 21:49:07.641939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.790 [2024-09-29 21:49:07.641968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:48.790 [2024-09-29 21:49:07.641977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.061 ms 00:17:48.791 [2024-09-29 21:49:07.641985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.791 [2024-09-29 21:49:07.663777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.791 [2024-09-29 21:49:07.663807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:48.791 [2024-09-29 21:49:07.663816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.735 ms 00:17:48.791 [2024-09-29 21:49:07.663823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.791 [2024-09-29 21:49:07.663855] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:48.791 [2024-09-29 21:49:07.663870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663880] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.663993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 
21:49:07.664069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:17:48.791 [2024-09-29 21:49:07.664253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:48.791 [2024-09-29 21:49:07.664408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:48.792 [2024-09-29 21:49:07.664674] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:17:48.792 [2024-09-29 21:49:07.664687] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1605127c-fec2-4388-9e4a-c6c135288dc4 00:17:48.792 [2024-09-29 21:49:07.664695] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:48.792 [2024-09-29 21:49:07.664702] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:48.792 [2024-09-29 21:49:07.664709] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:48.792 [2024-09-29 21:49:07.664719] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:48.792 [2024-09-29 21:49:07.664726] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:48.792 [2024-09-29 21:49:07.664735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:48.792 [2024-09-29 21:49:07.664742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:48.792 [2024-09-29 21:49:07.664748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:48.792 [2024-09-29 21:49:07.664754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:48.792 [2024-09-29 21:49:07.664761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.792 [2024-09-29 21:49:07.664768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:48.792 [2024-09-29 21:49:07.664776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:17:48.792 [2024-09-29 21:49:07.664783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.792 [2024-09-29 21:49:07.677474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.792 [2024-09-29 21:49:07.677507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:48.792 [2024-09-29 21:49:07.677517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.674 ms 00:17:48.792 [2024-09-29 21:49:07.677525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.792 [2024-09-29 21:49:07.677891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.792 [2024-09-29 21:49:07.677910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:48.792 [2024-09-29 21:49:07.677919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:17:48.792 [2024-09-29 21:49:07.677926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.792 [2024-09-29 21:49:07.710002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.792 [2024-09-29 21:49:07.710033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.792 [2024-09-29 21:49:07.710043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.792 [2024-09-29 21:49:07.710051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.792 [2024-09-29 21:49:07.710123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.792 [2024-09-29 21:49:07.710141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.792 [2024-09-29 21:49:07.710149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.792 [2024-09-29 21:49:07.710156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.792 [2024-09-29 21:49:07.710197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.792 
[2024-09-29 21:49:07.710209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.792 [2024-09-29 21:49:07.710217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.792 [2024-09-29 21:49:07.710225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.792 [2024-09-29 21:49:07.710243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.792 [2024-09-29 21:49:07.710251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.792 [2024-09-29 21:49:07.710258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.792 [2024-09-29 21:49:07.710266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.121 [2024-09-29 21:49:07.790019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.121 [2024-09-29 21:49:07.790069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.121 [2024-09-29 21:49:07.790079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.121 [2024-09-29 21:49:07.790087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.121 [2024-09-29 21:49:07.855084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.121 [2024-09-29 21:49:07.855130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.121 [2024-09-29 21:49:07.855142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.121 [2024-09-29 21:49:07.855150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.121 [2024-09-29 21:49:07.855210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.121 [2024-09-29 21:49:07.855220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.121 [2024-09-29 21:49:07.855233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.121 [2024-09-29 21:49:07.855241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.121 [2024-09-29 21:49:07.855271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.121 [2024-09-29 21:49:07.855279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.121 [2024-09-29 21:49:07.855287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.121 [2024-09-29 21:49:07.855295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.121 [2024-09-29 21:49:07.855400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.121 [2024-09-29 21:49:07.855410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.121 [2024-09-29 21:49:07.855418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.121 [2024-09-29 21:49:07.855429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.121 [2024-09-29 21:49:07.855465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.121 [2024-09-29 21:49:07.855474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:49.121 [2024-09-29 21:49:07.855482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.121 [2024-09-29 21:49:07.855490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.121 [2024-09-29 21:49:07.855530] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.121 [2024-09-29 21:49:07.855539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.121 [2024-09-29 21:49:07.855549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.121 [2024-09-29 21:49:07.855561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.121 [2024-09-29 21:49:07.855605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.121 [2024-09-29 21:49:07.855616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.121 [2024-09-29 21:49:07.855624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.121 [2024-09-29 21:49:07.855631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.121 [2024-09-29 21:49:07.855779] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.648 ms, result 0 00:17:49.705 00:17:49.705 00:17:49.705 21:49:08 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:49.966 21:49:08 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:50.535 21:49:09 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.535 [2024-09-29 21:49:09.288054] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:50.535 [2024-09-29 21:49:09.288176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74466 ] 00:17:50.535 [2024-09-29 21:49:09.439775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.796 [2024-09-29 21:49:09.653520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.057 [2024-09-29 21:49:09.925727] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.057 [2024-09-29 21:49:09.925791] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:51.319 [2024-09-29 21:49:10.081309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.319 [2024-09-29 21:49:10.081361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:51.319 [2024-09-29 21:49:10.081377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:51.319 [2024-09-29 21:49:10.081400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.319 [2024-09-29 21:49:10.084091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.319 [2024-09-29 21:49:10.084121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.319 [2024-09-29 21:49:10.084131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.672 ms 00:17:51.319 [2024-09-29 21:49:10.084142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.319 [2024-09-29 21:49:10.084209] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:51.319 [2024-09-29 21:49:10.084865] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:51.319 [2024-09-29 21:49:10.084889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.319 [2024-09-29 21:49:10.084900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.319 [2024-09-29 21:49:10.084909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:17:51.319 [2024-09-29 21:49:10.084917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.319 [2024-09-29 21:49:10.086296] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:51.319 [2024-09-29 21:49:10.099229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.319 [2024-09-29 21:49:10.099259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:51.319 [2024-09-29 21:49:10.099271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.934 ms 00:17:51.319 [2024-09-29 21:49:10.099279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.319 [2024-09-29 21:49:10.099365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.319 [2024-09-29 21:49:10.099377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:51.319 [2024-09-29 21:49:10.099401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:51.320 [2024-09-29 21:49:10.099410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.320 [2024-09-29 21:49:10.105847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.320 [2024-09-29 21:49:10.105872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.320 [2024-09-29 21:49:10.105882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.393 ms 00:17:51.320 [2024-09-29 21:49:10.105890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.320 [2024-09-29 21:49:10.105984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.320 [2024-09-29 21:49:10.105996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.320 [2024-09-29 21:49:10.106005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:51.320 [2024-09-29 21:49:10.106013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.320 [2024-09-29 21:49:10.106038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.320 [2024-09-29 21:49:10.106046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:51.320 [2024-09-29 21:49:10.106055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:51.320 [2024-09-29 21:49:10.106063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.320 [2024-09-29 21:49:10.106083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:51.320 [2024-09-29 21:49:10.109544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.320 [2024-09-29 21:49:10.109569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.320 [2024-09-29 21:49:10.109579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.466 ms 00:17:51.320 [2024-09-29 21:49:10.109586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.320 [2024-09-29 
21:49:10.109630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.320 [2024-09-29 21:49:10.109642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:51.320 [2024-09-29 21:49:10.109650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:51.320 [2024-09-29 21:49:10.109659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.320 [2024-09-29 21:49:10.109679] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:51.320 [2024-09-29 21:49:10.109699] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:51.320 [2024-09-29 21:49:10.109736] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:51.320 [2024-09-29 21:49:10.109751] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:51.320 [2024-09-29 21:49:10.109858] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:51.320 [2024-09-29 21:49:10.109869] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:51.320 [2024-09-29 21:49:10.109880] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:51.320 [2024-09-29 21:49:10.109891] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:51.320 [2024-09-29 21:49:10.109900] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:51.320 [2024-09-29 21:49:10.109908] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:51.320 [2024-09-29 21:49:10.109915] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:51.320 [2024-09-29 21:49:10.109922] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:51.320 [2024-09-29 21:49:10.109931] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:51.320 [2024-09-29 21:49:10.109940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.320 [2024-09-29 21:49:10.109949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:51.320 [2024-09-29 21:49:10.109957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:17:51.320 [2024-09-29 21:49:10.109965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.320 [2024-09-29 21:49:10.110063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.320 [2024-09-29 21:49:10.110073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:51.320 [2024-09-29 21:49:10.110080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:51.320 [2024-09-29 21:49:10.110088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.320 [2024-09-29 21:49:10.110196] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:51.320 [2024-09-29 21:49:10.110207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:51.320 [2024-09-29 21:49:10.110219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.320 [2024-09-29 21:49:10.110228] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:51.320 [2024-09-29 21:49:10.110243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:51.320 [2024-09-29 21:49:10.110258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:51.320 [2024-09-29 21:49:10.110265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.320 [2024-09-29 21:49:10.110279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:51.320 [2024-09-29 21:49:10.110293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:51.320 [2024-09-29 21:49:10.110300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:51.320 [2024-09-29 21:49:10.110307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:51.320 [2024-09-29 21:49:10.110314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:51.320 [2024-09-29 21:49:10.110320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:51.320 [2024-09-29 21:49:10.110334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:51.320 [2024-09-29 21:49:10.110341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:51.320 [2024-09-29 21:49:10.110355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.320 [2024-09-29 21:49:10.110370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:51.320 [2024-09-29 21:49:10.110377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.320 [2024-09-29 21:49:10.110405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:51.320 [2024-09-29 21:49:10.110411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.320 [2024-09-29 21:49:10.110425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:51.320 [2024-09-29 21:49:10.110432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:51.320 [2024-09-29 21:49:10.110446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:51.320 [2024-09-29 21:49:10.110452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.320 [2024-09-29 21:49:10.110466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:51.320 [2024-09-29 21:49:10.110473] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:51.320 [2024-09-29 21:49:10.110480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:51.320 [2024-09-29 21:49:10.110486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:51.320 [2024-09-29 21:49:10.110493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:51.320 [2024-09-29 21:49:10.110500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:51.320 [2024-09-29 21:49:10.110514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:51.320 [2024-09-29 21:49:10.110522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110529] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:51.320 [2024-09-29 21:49:10.110537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:51.320 [2024-09-29 21:49:10.110544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:51.320 [2024-09-29 21:49:10.110551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:51.320 [2024-09-29 21:49:10.110560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:51.320 [2024-09-29 21:49:10.110567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:51.320 [2024-09-29 21:49:10.110573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:51.320 [2024-09-29 21:49:10.110581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:51.320 [2024-09-29 21:49:10.110587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:51.320 [2024-09-29 21:49:10.110594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:51.320 [2024-09-29 21:49:10.110604] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:51.320 [2024-09-29 21:49:10.110617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.320 [2024-09-29 21:49:10.110625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:51.320 [2024-09-29 21:49:10.110633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:51.320 [2024-09-29 21:49:10.110640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:51.320 [2024-09-29 21:49:10.110648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:51.320 [2024-09-29 21:49:10.110655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:51.320 [2024-09-29 21:49:10.110662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:51.321 [2024-09-29 21:49:10.110669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:51.321 [2024-09-29 
21:49:10.110676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:51.321 [2024-09-29 21:49:10.110684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:51.321 [2024-09-29 21:49:10.110691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:51.321 [2024-09-29 21:49:10.110698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:51.321 [2024-09-29 21:49:10.110705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:51.321 [2024-09-29 21:49:10.110713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:51.321 [2024-09-29 21:49:10.110721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:51.321 [2024-09-29 21:49:10.110728] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:51.321 [2024-09-29 21:49:10.110735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:51.321 [2024-09-29 21:49:10.110744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:51.321 [2024-09-29 21:49:10.110752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:51.321 [2024-09-29 21:49:10.110759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:51.321 [2024-09-29 21:49:10.110766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:51.321 [2024-09-29 21:49:10.110773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.110783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:51.321 [2024-09-29 21:49:10.110791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:17:51.321 [2024-09-29 21:49:10.110798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.144094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.144134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.321 [2024-09-29 21:49:10.144146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.244 ms 00:17:51.321 [2024-09-29 21:49:10.144155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.144294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.144307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:51.321 [2024-09-29 21:49:10.144317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:51.321 [2024-09-29 21:49:10.144325] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.176924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.176954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.321 [2024-09-29 21:49:10.176964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.578 ms 00:17:51.321 [2024-09-29 21:49:10.176972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.177054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.177065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.321 [2024-09-29 21:49:10.177074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:51.321 [2024-09-29 21:49:10.177081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.177505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.177525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.321 [2024-09-29 21:49:10.177535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:17:51.321 [2024-09-29 21:49:10.177543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.177679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.177689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.321 [2024-09-29 21:49:10.177698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:51.321 [2024-09-29 21:49:10.177706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.191485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.191509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.321 [2024-09-29 21:49:10.191519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.756 ms 00:17:51.321 [2024-09-29 21:49:10.191527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.204261] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:51.321 [2024-09-29 21:49:10.204294] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:51.321 [2024-09-29 21:49:10.204305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.204313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:51.321 [2024-09-29 21:49:10.204322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.678 ms 00:17:51.321 [2024-09-29 21:49:10.204330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.228662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.228692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:51.321 [2024-09-29 21:49:10.228708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.252 ms 00:17:51.321 [2024-09-29 21:49:10.228716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 
21:49:10.240076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.240101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:51.321 [2024-09-29 21:49:10.240110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.294 ms 00:17:51.321 [2024-09-29 21:49:10.240118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.251375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.251409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:51.321 [2024-09-29 21:49:10.251418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.196 ms 00:17:51.321 [2024-09-29 21:49:10.251426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.321 [2024-09-29 21:49:10.252022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.321 [2024-09-29 21:49:10.252044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:51.321 [2024-09-29 21:49:10.252053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:17:51.321 [2024-09-29 21:49:10.252062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.582 [2024-09-29 21:49:10.311074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.582 [2024-09-29 21:49:10.311117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:51.582 [2024-09-29 21:49:10.311129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.988 ms 00:17:51.582 [2024-09-29 21:49:10.311138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.582 [2024-09-29 21:49:10.322016] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:51.582 [2024-09-29 21:49:10.338645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.582 [2024-09-29 21:49:10.338680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:51.582 [2024-09-29 21:49:10.338692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.400 ms 00:17:51.582 [2024-09-29 21:49:10.338700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.582 [2024-09-29 21:49:10.338795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.582 [2024-09-29 21:49:10.338807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:51.582 [2024-09-29 21:49:10.338817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:51.582 [2024-09-29 21:49:10.338825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.582 [2024-09-29 21:49:10.338882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.582 [2024-09-29 21:49:10.338894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:51.582 [2024-09-29 21:49:10.338902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:51.582 [2024-09-29 21:49:10.338910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.582 [2024-09-29 21:49:10.338931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.582 [2024-09-29 21:49:10.338941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:51.582 [2024-09-29 21:49:10.338949] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:51.582 [2024-09-29 21:49:10.338957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.582 [2024-09-29 21:49:10.338990] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:51.582 [2024-09-29 21:49:10.339001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.582 [2024-09-29 21:49:10.339011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:51.582 [2024-09-29 21:49:10.339019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:51.582 [2024-09-29 21:49:10.339027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.582 [2024-09-29 21:49:10.362931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.582 [2024-09-29 21:49:10.362961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:51.582 [2024-09-29 21:49:10.362972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.884 ms 00:17:51.582 [2024-09-29 21:49:10.362981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.582 [2024-09-29 21:49:10.363076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.582 [2024-09-29 21:49:10.363088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:51.582 [2024-09-29 21:49:10.363098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:51.582 [2024-09-29 21:49:10.363106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.582 [2024-09-29 21:49:10.364000] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.582 [2024-09-29 21:49:10.367035] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 282.382 ms, result 0 00:17:51.582 [2024-09-29 21:49:10.367643] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.582 [2024-09-29 21:49:10.380426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.582  Copying: 4096/4096 [kB] (average 40 MBps)[2024-09-29 21:49:10.482743] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.582 [2024-09-29 21:49:10.491171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.582 [2024-09-29 21:49:10.491201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:51.583 [2024-09-29 21:49:10.491213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:51.583 [2024-09-29 21:49:10.491222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.583 [2024-09-29 21:49:10.491242] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:51.583 [2024-09-29 21:49:10.494007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.583 [2024-09-29 21:49:10.494031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:51.583 [2024-09-29 21:49:10.494041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms 00:17:51.583 [2024-09-29 21:49:10.494049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
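The trace_step blocks above repeat the same four NOTICE lines (Action / name / duration / status) for every FTL management step, so per-step timing can be totalled straight from the log. A minimal sketch, assuming the log was saved one entry per line as spdk_tgt prints it, under the hypothetical filename ftl.log:

  # Hypothetical helper, not part of the SPDK tree; it relies only on the
  # 428:trace_step (name) and 430:trace_step (duration) lines shown above.
  awk '
    /428:trace_step/ { name = $0; sub(/.*name: /, "", name) }           # remember the step name
    /430:trace_step/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d)
                       total[name] += d }                               # accumulate milliseconds
    END { for (n in total) printf "%10.3f ms  %s\n", total[n], n }
  ' ftl.log | sort -rn

Sorted output puts the slow steps at the top - Restore P2L checkpoints (58.988 ms), Initialize metadata (33.244 ms), and the persist steps in the 22-24 ms range - matching the durations visible in the trace above.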
00:17:51.583 [2024-09-29 21:49:10.495947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.583 [2024-09-29 21:49:10.495990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:51.583 [2024-09-29 21:49:10.496001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:17:51.583 [2024-09-29 21:49:10.496010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.583 [2024-09-29 21:49:10.499919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.583 [2024-09-29 21:49:10.499941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:51.583 [2024-09-29 21:49:10.499950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.894 ms 00:17:51.583 [2024-09-29 21:49:10.499958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.583 [2024-09-29 21:49:10.506844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.583 [2024-09-29 21:49:10.506879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:51.583 [2024-09-29 21:49:10.506893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.861 ms 00:17:51.583 [2024-09-29 21:49:10.506902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.583 [2024-09-29 21:49:10.529480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.583 [2024-09-29 21:49:10.529508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:51.583 [2024-09-29 21:49:10.529518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.520 ms 00:17:51.583 [2024-09-29 21:49:10.529525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.583 [2024-09-29 21:49:10.544013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.583 [2024-09-29 21:49:10.544041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:51.583 [2024-09-29 21:49:10.544053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.455 ms 00:17:51.583 [2024-09-29 21:49:10.544062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.583 [2024-09-29 21:49:10.544191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.583 [2024-09-29 21:49:10.544202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:51.583 [2024-09-29 21:49:10.544211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:51.583 [2024-09-29 21:49:10.544219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.843 [2024-09-29 21:49:10.567536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.843 [2024-09-29 21:49:10.567562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:51.843 [2024-09-29 21:49:10.567572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.295 ms 00:17:51.843 [2024-09-29 21:49:10.567579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.843 [2024-09-29 21:49:10.590208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.843 [2024-09-29 21:49:10.590233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:51.843 [2024-09-29 21:49:10.590243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.595 ms 00:17:51.843 [2024-09-29 21:49:10.590250] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.843 [2024-09-29 21:49:10.612435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.843 [2024-09-29 21:49:10.612469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:51.843 [2024-09-29 21:49:10.612480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.153 ms 00:17:51.843 [2024-09-29 21:49:10.612487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.843 [2024-09-29 21:49:10.634860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.843 [2024-09-29 21:49:10.634886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:51.843 [2024-09-29 21:49:10.634895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.316 ms 00:17:51.843 [2024-09-29 21:49:10.634903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.843 [2024-09-29 21:49:10.634936] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:51.843 [2024-09-29 21:49:10.634950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free 00:17:51.844 [2024-09-29 21:49:10.635744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:51.844 [2024-09-29 21:49:10.635752] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1605127c-fec2-4388-9e4a-c6c135288dc4 00:17:51.844 [2024-09-29 21:49:10.635761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:51.844 [2024-09-29 21:49:10.635768] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:51.844 [2024-09-29 21:49:10.635778] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:51.844 [2024-09-29 21:49:10.635786] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:51.844 [2024-09-29 21:49:10.635794] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:51.844 [2024-09-29 21:49:10.635802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:51.844 [2024-09-29 21:49:10.635809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:51.844 [2024-09-29 21:49:10.635815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:51.844 [2024-09-29 21:49:10.635821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:51.844 [2024-09-29 21:49:10.635828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.844 [2024-09-29 21:49:10.635836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:51.844 [2024-09-29 21:49:10.635844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:17:51.844 [2024-09-29 21:49:10.635852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.844 [2024-09-29 21:49:10.648365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.844 [2024-09-29 21:49:10.648407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:51.844 [2024-09-29 21:49:10.648418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.487 ms 00:17:51.844 [2024-09-29 21:49:10.648426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.844 [2024-09-29 21:49:10.648790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.844 [2024-09-29 21:49:10.648806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize
P2L checkpointing 00:17:51.844 [2024-09-29 21:49:10.648814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:17:51.844 [2024-09-29 21:49:10.648822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.844 [2024-09-29 21:49:10.681024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.844 [2024-09-29 21:49:10.681054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.844 [2024-09-29 21:49:10.681065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.844 [2024-09-29 21:49:10.681072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.844 [2024-09-29 21:49:10.681148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.844 [2024-09-29 21:49:10.681158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.844 [2024-09-29 21:49:10.681166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.844 [2024-09-29 21:49:10.681174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.844 [2024-09-29 21:49:10.681214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.844 [2024-09-29 21:49:10.681226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.844 [2024-09-29 21:49:10.681234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.844 [2024-09-29 21:49:10.681241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.844 [2024-09-29 21:49:10.681257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.844 [2024-09-29 21:49:10.681265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.844 [2024-09-29 21:49:10.681273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.844 [2024-09-29 21:49:10.681280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.844 [2024-09-29 21:49:10.763004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.844 [2024-09-29 21:49:10.763055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.844 [2024-09-29 21:49:10.763069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.844 [2024-09-29 21:49:10.763077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.102 [2024-09-29 21:49:10.829439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.102 [2024-09-29 21:49:10.829488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.102 [2024-09-29 21:49:10.829501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.102 [2024-09-29 21:49:10.829508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.102 [2024-09-29 21:49:10.829573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.102 [2024-09-29 21:49:10.829583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.102 [2024-09-29 21:49:10.829596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.102 [2024-09-29 21:49:10.829603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.102 [2024-09-29 21:49:10.829635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.102 [2024-09-29 
21:49:10.829643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.102 [2024-09-29 21:49:10.829651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.102 [2024-09-29 21:49:10.829659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.102 [2024-09-29 21:49:10.829753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.102 [2024-09-29 21:49:10.829764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.102 [2024-09-29 21:49:10.829773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.102 [2024-09-29 21:49:10.829783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.102 [2024-09-29 21:49:10.829813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.102 [2024-09-29 21:49:10.829824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:52.102 [2024-09-29 21:49:10.829832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.103 [2024-09-29 21:49:10.829840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.103 [2024-09-29 21:49:10.829881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.103 [2024-09-29 21:49:10.829891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.103 [2024-09-29 21:49:10.829899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.103 [2024-09-29 21:49:10.829910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.103 [2024-09-29 21:49:10.829958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.103 [2024-09-29 21:49:10.829968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.103 [2024-09-29 21:49:10.829976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.103 [2024-09-29 21:49:10.829984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.103 [2024-09-29 21:49:10.830131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.938 ms, result 0 00:17:52.668 00:17:52.668 00:17:52.926 21:49:11 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74491 00:17:52.926 21:49:11 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:52.926 21:49:11 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74491 00:17:52.926 21:49:11 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74491 ']' 00:17:52.926 21:49:11 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.926 21:49:11 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.926 21:49:11 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.926 21:49:11 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.926 21:49:11 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:52.926 [2024-09-29 21:49:11.731231] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
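The xtrace lines above and below show the shape of the trim test: start a target, wait for its RPC socket, replay the saved config, issue two unmaps, and tear down. A condensed sketch of that flow, using the paths from the trace (waitforlisten and killprocess are helpers from autotest_common.sh; in the real script the config JSON for load_config arrives on stdin):

  # Sketch condensed from the xtrace; not the verbatim ftl/trim.sh.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &   # -L ftl_init enables the FTL init log flag
  svcpid=$!                                                       # 74491 in this run
  waitforlisten "$svcpid"                  # polls until /var/tmp/spdk.sock accepts RPCs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config         # recreates ftl0 from the saved bdev config
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  killprocess "$svcpid"                    # SIGTERM, then wait for the clean FTL shutdown

The two unmap targets bracket the device: LBA 0, and LBA 23591936, i.e. the last 1024 blocks of the 23592960 L2P entries reported in the layout dump; each shows up below as a 'Process trim' management step finishing with status 0.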
00:17:52.926 [2024-09-29 21:49:11.731353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74491 ] 00:17:52.926 [2024-09-29 21:49:11.880319] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.184 [2024-09-29 21:49:12.083125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.747 21:49:12 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.747 21:49:12 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:53.747 21:49:12 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:54.004 [2024-09-29 21:49:12.932632] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.004 [2024-09-29 21:49:12.932707] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.264 [2024-09-29 21:49:13.104562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.104630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.264 [2024-09-29 21:49:13.104646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.264 [2024-09-29 21:49:13.104657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.107425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.107457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.264 [2024-09-29 21:49:13.107470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.749 ms 00:17:54.264 [2024-09-29 21:49:13.107478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.107548] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.264 [2024-09-29 21:49:13.108235] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.264 [2024-09-29 21:49:13.108265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.108274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.264 [2024-09-29 21:49:13.108284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:17:54.264 [2024-09-29 21:49:13.108292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.109688] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:54.264 [2024-09-29 21:49:13.122326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.122369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:54.264 [2024-09-29 21:49:13.122381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.642 ms 00:17:54.264 [2024-09-29 21:49:13.122409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.122496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.122511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:54.264 [2024-09-29 21:49:13.122520] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:54.264 [2024-09-29 21:49:13.122529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.128889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.128924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.264 [2024-09-29 21:49:13.128934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.297 ms 00:17:54.264 [2024-09-29 21:49:13.128945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.129048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.129061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.264 [2024-09-29 21:49:13.129071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:54.264 [2024-09-29 21:49:13.129082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.129107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.129117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.264 [2024-09-29 21:49:13.129126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:54.264 [2024-09-29 21:49:13.129135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.129156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:54.264 [2024-09-29 21:49:13.132675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.132701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.264 [2024-09-29 21:49:13.132713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.521 ms 00:17:54.264 [2024-09-29 21:49:13.132723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.132779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.132787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.264 [2024-09-29 21:49:13.132797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:54.264 [2024-09-29 21:49:13.132805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.132827] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:54.264 [2024-09-29 21:49:13.132846] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:54.264 [2024-09-29 21:49:13.132888] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:54.264 [2024-09-29 21:49:13.132906] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:54.264 [2024-09-29 21:49:13.133014] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.264 [2024-09-29 21:49:13.133026] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.264 [2024-09-29 21:49:13.133039] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:54.264 [2024-09-29 21:49:13.133049] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.264 [2024-09-29 21:49:13.133059] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.264 [2024-09-29 21:49:13.133069] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:54.264 [2024-09-29 21:49:13.133077] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.264 [2024-09-29 21:49:13.133084] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.264 [2024-09-29 21:49:13.133095] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.264 [2024-09-29 21:49:13.133105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.133113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.264 [2024-09-29 21:49:13.133121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:17:54.264 [2024-09-29 21:49:13.133130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.133229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.264 [2024-09-29 21:49:13.133240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.264 [2024-09-29 21:49:13.133247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:54.264 [2024-09-29 21:49:13.133257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.264 [2024-09-29 21:49:13.133358] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.264 [2024-09-29 21:49:13.133373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.264 [2024-09-29 21:49:13.133381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.264 [2024-09-29 21:49:13.133405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.264 [2024-09-29 21:49:13.133412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.264 [2024-09-29 21:49:13.133421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.264 [2024-09-29 21:49:13.133427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:54.264 [2024-09-29 21:49:13.133442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.264 [2024-09-29 21:49:13.133450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:54.265 [2024-09-29 21:49:13.133458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.265 [2024-09-29 21:49:13.133465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.265 [2024-09-29 21:49:13.133473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:54.265 [2024-09-29 21:49:13.133480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.265 [2024-09-29 21:49:13.133489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.265 [2024-09-29 21:49:13.133496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:54.265 [2024-09-29 21:49:13.133504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.265 
[2024-09-29 21:49:13.133512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.265 [2024-09-29 21:49:13.133521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:54.265 [2024-09-29 21:49:13.133533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.265 [2024-09-29 21:49:13.133541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.265 [2024-09-29 21:49:13.133548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:54.265 [2024-09-29 21:49:13.133556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.265 [2024-09-29 21:49:13.133564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.265 [2024-09-29 21:49:13.133584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:54.265 [2024-09-29 21:49:13.133592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.265 [2024-09-29 21:49:13.133601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.265 [2024-09-29 21:49:13.133608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:54.265 [2024-09-29 21:49:13.133616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.265 [2024-09-29 21:49:13.133622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.265 [2024-09-29 21:49:13.133631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:54.265 [2024-09-29 21:49:13.133637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.265 [2024-09-29 21:49:13.133648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.265 [2024-09-29 21:49:13.133654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:54.265 [2024-09-29 21:49:13.133662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.265 [2024-09-29 21:49:13.133669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.265 [2024-09-29 21:49:13.133677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:54.265 [2024-09-29 21:49:13.133684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.265 [2024-09-29 21:49:13.133692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.265 [2024-09-29 21:49:13.133699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:54.265 [2024-09-29 21:49:13.133709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.265 [2024-09-29 21:49:13.133715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.265 [2024-09-29 21:49:13.133724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:54.265 [2024-09-29 21:49:13.133731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.265 [2024-09-29 21:49:13.133739] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.265 [2024-09-29 21:49:13.133747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.265 [2024-09-29 21:49:13.133755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.265 [2024-09-29 21:49:13.133762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.265 [2024-09-29 21:49:13.133773] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:54.265 [2024-09-29 21:49:13.133779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.265 [2024-09-29 21:49:13.133787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.265 [2024-09-29 21:49:13.133794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.265 [2024-09-29 21:49:13.133803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.265 [2024-09-29 21:49:13.133809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.265 [2024-09-29 21:49:13.133818] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.265 [2024-09-29 21:49:13.133827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.265 [2024-09-29 21:49:13.133858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:54.265 [2024-09-29 21:49:13.133869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:54.265 [2024-09-29 21:49:13.133880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:54.265 [2024-09-29 21:49:13.133888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:54.265 [2024-09-29 21:49:13.133897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:54.265 [2024-09-29 21:49:13.133905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:54.265 [2024-09-29 21:49:13.133913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:54.265 [2024-09-29 21:49:13.133920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:54.265 [2024-09-29 21:49:13.133929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:54.265 [2024-09-29 21:49:13.133937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:54.265 [2024-09-29 21:49:13.133945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:54.265 [2024-09-29 21:49:13.133953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:54.265 [2024-09-29 21:49:13.133962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:54.265 [2024-09-29 21:49:13.133969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:54.265 [2024-09-29 21:49:13.133978] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.265 [2024-09-29 
21:49:13.133986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.265 [2024-09-29 21:49:13.133999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.265 [2024-09-29 21:49:13.134007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.265 [2024-09-29 21:49:13.134015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.265 [2024-09-29 21:49:13.134022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.265 [2024-09-29 21:49:13.134031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.265 [2024-09-29 21:49:13.134038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.265 [2024-09-29 21:49:13.134048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:17:54.265 [2024-09-29 21:49:13.134055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.265 [2024-09-29 21:49:13.162770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.265 [2024-09-29 21:49:13.162808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.265 [2024-09-29 21:49:13.162821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.654 ms 00:17:54.265 [2024-09-29 21:49:13.162830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.265 [2024-09-29 21:49:13.162947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.265 [2024-09-29 21:49:13.162958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.265 [2024-09-29 21:49:13.162968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:54.265 [2024-09-29 21:49:13.162976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.265 [2024-09-29 21:49:13.204180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.265 [2024-09-29 21:49:13.204436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.265 [2024-09-29 21:49:13.204471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.175 ms 00:17:54.265 [2024-09-29 21:49:13.204486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.265 [2024-09-29 21:49:13.204611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.265 [2024-09-29 21:49:13.204628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.265 [2024-09-29 21:49:13.204648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.265 [2024-09-29 21:49:13.204659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.265 [2024-09-29 21:49:13.205127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.265 [2024-09-29 21:49:13.205147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.265 [2024-09-29 21:49:13.205164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:17:54.265 [2024-09-29 21:49:13.205175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:54.265 [2024-09-29 21:49:13.205365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.265 [2024-09-29 21:49:13.205378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.265 [2024-09-29 21:49:13.205415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:17:54.265 [2024-09-29 21:49:13.205429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.265 [2024-09-29 21:49:13.222770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.265 [2024-09-29 21:49:13.222800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.265 [2024-09-29 21:49:13.222812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.308 ms 00:17:54.265 [2024-09-29 21:49:13.222822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.265 [2024-09-29 21:49:13.235574] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:54.266 [2024-09-29 21:49:13.235712] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.266 [2024-09-29 21:49:13.235730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.266 [2024-09-29 21:49:13.235739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.266 [2024-09-29 21:49:13.235750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.803 ms 00:17:54.266 [2024-09-29 21:49:13.235758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.260095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.260128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.524 [2024-09-29 21:49:13.260140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.270 ms 00:17:54.524 [2024-09-29 21:49:13.260154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.271666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.271695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.524 [2024-09-29 21:49:13.271710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.442 ms 00:17:54.524 [2024-09-29 21:49:13.271718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.282692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.282809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.524 [2024-09-29 21:49:13.282827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.912 ms 00:17:54.524 [2024-09-29 21:49:13.282835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.283463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.283481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.524 [2024-09-29 21:49:13.283493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:17:54.524 [2024-09-29 21:49:13.283504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 
21:49:13.341766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.341948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.524 [2024-09-29 21:49:13.341972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.237 ms 00:17:54.524 [2024-09-29 21:49:13.341984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.352498] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:54.524 [2024-09-29 21:49:13.368895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.368940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.524 [2024-09-29 21:49:13.368953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.821 ms 00:17:54.524 [2024-09-29 21:49:13.368964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.369054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.369068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.524 [2024-09-29 21:49:13.369076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:54.524 [2024-09-29 21:49:13.369086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.369143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.369155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.524 [2024-09-29 21:49:13.369164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:54.524 [2024-09-29 21:49:13.369175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.369200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.369210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.524 [2024-09-29 21:49:13.369221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:54.524 [2024-09-29 21:49:13.369235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.369269] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.524 [2024-09-29 21:49:13.369285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.369293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.524 [2024-09-29 21:49:13.369302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:54.524 [2024-09-29 21:49:13.369309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.393084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.393119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.524 [2024-09-29 21:49:13.393133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.749 ms 00:17:54.524 [2024-09-29 21:49:13.393143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.524 [2024-09-29 21:49:13.393233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.524 [2024-09-29 21:49:13.393244] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:54.524 [2024-09-29 21:49:13.393255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:17:54.524 [2024-09-29 21:49:13.393265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.524 [2024-09-29 21:49:13.394172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:54.524 [2024-09-29 21:49:13.397161] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 289.276 ms, result 0
00:17:54.524 [2024-09-29 21:49:13.398171] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:54.524 Some configs were skipped because the RPC state that can call them passed over.
00:17:54.524 21:49:13 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:17:54.782 [2024-09-29 21:49:13.624746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:54.782 [2024-09-29 21:49:13.624946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:54.782 [2024-09-29 21:49:13.625002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms
00:17:54.782 [2024-09-29 21:49:13.625028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:54.782 [2024-09-29 21:49:13.625078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.068 ms, result 0
00:17:54.782 true
00:17:54.782 21:49:13 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:17:55.041 [2024-09-29 21:49:13.828607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.041 [2024-09-29 21:49:13.828758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:17:55.041 [2024-09-29 21:49:13.828842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms
00:17:55.041 [2024-09-29 21:49:13.828865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.041 [2024-09-29 21:49:13.828944] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.704 ms, result 0
00:17:55.041 true
00:17:55.041 21:49:13 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74491
00:17:55.041 21:49:13 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74491 ']'
00:17:55.041 21:49:13 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74491
00:17:55.041 21:49:13 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:17:55.041 21:49:13 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:55.041 21:49:13 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74491
00:17:55.041 killing process with pid 74491
21:49:13 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:55.041 21:49:13 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:55.041 21:49:13 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74491'
00:17:55.041 21:49:13 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74491
00:17:55.041 21:49:13 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74491
00:17:55.608 [2024-09-29 21:49:14.495740]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.495802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:55.608 [2024-09-29 21:49:14.495813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:55.608 [2024-09-29 21:49:14.495821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.495841] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:55.608 [2024-09-29 21:49:14.497959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.497985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:55.608 [2024-09-29 21:49:14.497999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:17:55.608 [2024-09-29 21:49:14.498006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.498275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.498285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:55.608 [2024-09-29 21:49:14.498296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:17:55.608 [2024-09-29 21:49:14.498303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.501541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.501565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:55.608 [2024-09-29 21:49:14.501574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.222 ms 00:17:55.608 [2024-09-29 21:49:14.501580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.506827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.507011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:55.608 [2024-09-29 21:49:14.507032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.217 ms 00:17:55.608 [2024-09-29 21:49:14.507041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.514839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.514867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:55.608 [2024-09-29 21:49:14.514879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.731 ms 00:17:55.608 [2024-09-29 21:49:14.514886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.521607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.521728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:55.608 [2024-09-29 21:49:14.521743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.686 ms 00:17:55.608 [2024-09-29 21:49:14.521757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.521870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.521879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:55.608 [2024-09-29 21:49:14.521888] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:55.608 [2024-09-29 21:49:14.521896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.530091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.530116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:55.608 [2024-09-29 21:49:14.530126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.175 ms 00:17:55.608 [2024-09-29 21:49:14.530132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.537508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.537532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:55.608 [2024-09-29 21:49:14.537545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.338 ms 00:17:55.608 [2024-09-29 21:49:14.537550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.544315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.544340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:55.608 [2024-09-29 21:49:14.544348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.726 ms 00:17:55.608 [2024-09-29 21:49:14.544354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.551480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.608 [2024-09-29 21:49:14.551584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:55.608 [2024-09-29 21:49:14.551598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.060 ms 00:17:55.608 [2024-09-29 21:49:14.551604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.608 [2024-09-29 21:49:14.551631] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:55.608 [2024-09-29 21:49:14.551645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551718] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 
[2024-09-29 21:49:14.551888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:55.608 [2024-09-29 21:49:14.551909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.551995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:55.609 [2024-09-29 21:49:14.552053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:55.609 [2024-09-29 21:49:14.552324] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:55.609 [2024-09-29 21:49:14.552333] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1605127c-fec2-4388-9e4a-c6c135288dc4 00:17:55.609 [2024-09-29 21:49:14.552339] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:55.609 [2024-09-29 21:49:14.552347] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:55.609 [2024-09-29 21:49:14.552352] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:55.609 [2024-09-29 21:49:14.552360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:55.609 [2024-09-29 21:49:14.552370] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:55.609 [2024-09-29 21:49:14.552378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:55.609 [2024-09-29 21:49:14.552399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:55.609 [2024-09-29 21:49:14.552406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:55.609 [2024-09-29 21:49:14.552411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:55.609 [2024-09-29 21:49:14.552418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
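Editor's note: the hundred-line band dump above is identical for every band (0 / 261120, wr_cnt 0, state free). For larger dumps a tally is quicker to read than the raw records; a minimal sketch, assuming the console output has been saved one record per line to a file named ftl_trim.log (both the file name and that layout are assumptions, not part of this run):

    # Tally FTL bands by state from the ftl_debug.c "Band N: ... state: X" records.
    grep -o 'Band [0-9]*: .* state: [a-z]*' ftl_trim.log |
      awk '{states[$NF]++} END {for (s in states) print states[s], "bands", s}'
    # For the dump above this prints: 100 bands free
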
00:17:55.609 [2024-09-29 21:49:14.552425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:55.609 [2024-09-29 21:49:14.552433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:17:55.609 [2024-09-29 21:49:14.552439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.609 [2024-09-29 21:49:14.562603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.609 [2024-09-29 21:49:14.562703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:55.609 [2024-09-29 21:49:14.562719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.146 ms 00:17:55.609 [2024-09-29 21:49:14.562726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.609 [2024-09-29 21:49:14.563039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.609 [2024-09-29 21:49:14.563053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:55.609 [2024-09-29 21:49:14.563061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:17:55.609 [2024-09-29 21:49:14.563067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.867 [2024-09-29 21:49:14.595773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.867 [2024-09-29 21:49:14.595804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.867 [2024-09-29 21:49:14.595814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.867 [2024-09-29 21:49:14.595822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.867 [2024-09-29 21:49:14.595911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.867 [2024-09-29 21:49:14.595920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.867 [2024-09-29 21:49:14.595929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.867 [2024-09-29 21:49:14.595935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.867 [2024-09-29 21:49:14.595975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.868 [2024-09-29 21:49:14.595982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.868 [2024-09-29 21:49:14.595992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.868 [2024-09-29 21:49:14.595998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.868 [2024-09-29 21:49:14.596016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.868 [2024-09-29 21:49:14.596023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.868 [2024-09-29 21:49:14.596030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.868 [2024-09-29 21:49:14.596036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.868 [2024-09-29 21:49:14.659334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.868 [2024-09-29 21:49:14.659377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.868 [2024-09-29 21:49:14.659498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.868 [2024-09-29 21:49:14.659511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.868 [2024-09-29 
21:49:14.710577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.868 [2024-09-29 21:49:14.710626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:55.868 [2024-09-29 21:49:14.710637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.868 [2024-09-29 21:49:14.710644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.868 [2024-09-29 21:49:14.711698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.868 [2024-09-29 21:49:14.711723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.868 [2024-09-29 21:49:14.711735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.868 [2024-09-29 21:49:14.711741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.868 [2024-09-29 21:49:14.711771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.868 [2024-09-29 21:49:14.711781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.868 [2024-09-29 21:49:14.711789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.868 [2024-09-29 21:49:14.711795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.868 [2024-09-29 21:49:14.711878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.868 [2024-09-29 21:49:14.711887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.868 [2024-09-29 21:49:14.711895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.868 [2024-09-29 21:49:14.711902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.868 [2024-09-29 21:49:14.711934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.868 [2024-09-29 21:49:14.711943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:55.868 [2024-09-29 21:49:14.711953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.868 [2024-09-29 21:49:14.711959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.868 [2024-09-29 21:49:14.711996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.868 [2024-09-29 21:49:14.712004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.868 [2024-09-29 21:49:14.712014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.868 [2024-09-29 21:49:14.712021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.868 [2024-09-29 21:49:14.712064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.868 [2024-09-29 21:49:14.712073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.868 [2024-09-29 21:49:14.712082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.868 [2024-09-29 21:49:14.712090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.868 [2024-09-29 21:49:14.712214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 216.455 ms, result 0 00:17:56.805 21:49:15 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:56.805 [2024-09-29 21:49:15.578628] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:56.805 [2024-09-29 21:49:15.578731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74544 ] 00:17:56.805 [2024-09-29 21:49:15.724068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.065 [2024-09-29 21:49:15.926340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.325 [2024-09-29 21:49:16.198222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:57.325 [2024-09-29 21:49:16.198287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:57.587 [2024-09-29 21:49:16.354007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.587 [2024-09-29 21:49:16.354055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:57.587 [2024-09-29 21:49:16.354072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:57.587 [2024-09-29 21:49:16.354082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.587 [2024-09-29 21:49:16.356836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.587 [2024-09-29 21:49:16.357029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.587 [2024-09-29 21:49:16.357046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.735 ms 00:17:57.587 [2024-09-29 21:49:16.357059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.587 [2024-09-29 21:49:16.357452] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:57.587 [2024-09-29 21:49:16.358169] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:57.587 [2024-09-29 21:49:16.358203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.587 [2024-09-29 21:49:16.358216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.587 [2024-09-29 21:49:16.358226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:17:57.587 [2024-09-29 21:49:16.358234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.587 [2024-09-29 21:49:16.359653] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:57.587 [2024-09-29 21:49:16.372446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.587 [2024-09-29 21:49:16.372479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:57.587 [2024-09-29 21:49:16.372491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.794 ms 00:17:57.587 [2024-09-29 21:49:16.372500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.587 [2024-09-29 21:49:16.372582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.587 [2024-09-29 21:49:16.372594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:57.587 [2024-09-29 21:49:16.372606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:57.587 [2024-09-29 
21:49:16.372614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.587 [2024-09-29 21:49:16.379209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.587 [2024-09-29 21:49:16.379239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.587 [2024-09-29 21:49:16.379248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.553 ms 00:17:57.587 [2024-09-29 21:49:16.379256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.587 [2024-09-29 21:49:16.379353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.587 [2024-09-29 21:49:16.379365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.587 [2024-09-29 21:49:16.379373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:57.587 [2024-09-29 21:49:16.379381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.587 [2024-09-29 21:49:16.379423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.587 [2024-09-29 21:49:16.379432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:57.587 [2024-09-29 21:49:16.379441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:57.587 [2024-09-29 21:49:16.379448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.587 [2024-09-29 21:49:16.379468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:57.587 [2024-09-29 21:49:16.383120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.587 [2024-09-29 21:49:16.383346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.587 [2024-09-29 21:49:16.383365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.656 ms 00:17:57.587 [2024-09-29 21:49:16.383374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.588 [2024-09-29 21:49:16.383444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.588 [2024-09-29 21:49:16.383458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:57.588 [2024-09-29 21:49:16.383466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:57.588 [2024-09-29 21:49:16.383474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.588 [2024-09-29 21:49:16.383492] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:57.588 [2024-09-29 21:49:16.383512] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:57.588 [2024-09-29 21:49:16.383548] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:57.588 [2024-09-29 21:49:16.383565] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:57.588 [2024-09-29 21:49:16.383674] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:57.588 [2024-09-29 21:49:16.383686] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:57.588 [2024-09-29 21:49:16.383697] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
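Editor's note: the layout and superblock dumps around this point report each region twice, once in MiB (the ftl_layout.c dump_region lines) and once as hex blk_offs/blk_sz block counts (the ftl_sb_v5.c lines). The two agree if one block is 4 KiB, an inference the dump itself supports; a quick shell check using the l2p region (type 0x2, blk_sz 0x5a00), which the MiB dump reports as 90.00 MiB:

    # 0x5a00 blocks * 4 KiB per block = 92160 KiB = 90 MiB
    echo $(( 0x5a00 * 4 / 1024 )) MiB   # prints: 90 MiB
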
00:17:57.588 [2024-09-29 21:49:16.383707] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:57.588 [2024-09-29 21:49:16.383716] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:57.588 [2024-09-29 21:49:16.383725] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:57.588 [2024-09-29 21:49:16.383732] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:57.588 [2024-09-29 21:49:16.383740] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:57.588 [2024-09-29 21:49:16.383748] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:57.588 [2024-09-29 21:49:16.383756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.588 [2024-09-29 21:49:16.383767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:57.588 [2024-09-29 21:49:16.383774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:17:57.588 [2024-09-29 21:49:16.383781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.588 [2024-09-29 21:49:16.383873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.588 [2024-09-29 21:49:16.383882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:57.588 [2024-09-29 21:49:16.383890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:57.588 [2024-09-29 21:49:16.383898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.588 [2024-09-29 21:49:16.383997] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:57.588 [2024-09-29 21:49:16.384008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:57.588 [2024-09-29 21:49:16.384018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:57.588 [2024-09-29 21:49:16.384026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:57.588 [2024-09-29 21:49:16.384042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:57.588 [2024-09-29 21:49:16.384056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:57.588 [2024-09-29 21:49:16.384064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:57.588 [2024-09-29 21:49:16.384079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:57.588 [2024-09-29 21:49:16.384092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:57.588 [2024-09-29 21:49:16.384099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:57.588 [2024-09-29 21:49:16.384106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:57.588 [2024-09-29 21:49:16.384114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:57.588 [2024-09-29 21:49:16.384120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:57.588 [2024-09-29 21:49:16.384135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:57.588 [2024-09-29 21:49:16.384141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:57.588 [2024-09-29 21:49:16.384156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.588 [2024-09-29 21:49:16.384170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:57.588 [2024-09-29 21:49:16.384177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.588 [2024-09-29 21:49:16.384190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:57.588 [2024-09-29 21:49:16.384198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.588 [2024-09-29 21:49:16.384210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:57.588 [2024-09-29 21:49:16.384218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.588 [2024-09-29 21:49:16.384231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:57.588 [2024-09-29 21:49:16.384238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:57.588 [2024-09-29 21:49:16.384251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:57.588 [2024-09-29 21:49:16.384258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:57.588 [2024-09-29 21:49:16.384265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:57.588 [2024-09-29 21:49:16.384272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:57.588 [2024-09-29 21:49:16.384278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:57.588 [2024-09-29 21:49:16.384285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:57.588 [2024-09-29 21:49:16.384299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:57.588 [2024-09-29 21:49:16.384308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384316] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:57.588 [2024-09-29 21:49:16.384323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:57.588 [2024-09-29 21:49:16.384331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:57.588 [2024-09-29 21:49:16.384338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.588 [2024-09-29 21:49:16.384346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:57.588 [2024-09-29 21:49:16.384352] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:57.588 [2024-09-29 21:49:16.384360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:57.588 [2024-09-29 21:49:16.384367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:57.588 [2024-09-29 21:49:16.384373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:57.588 [2024-09-29 21:49:16.384380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:57.588 [2024-09-29 21:49:16.384401] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:57.588 [2024-09-29 21:49:16.384414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:57.588 [2024-09-29 21:49:16.384422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:57.588 [2024-09-29 21:49:16.384430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:57.588 [2024-09-29 21:49:16.384437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:57.588 [2024-09-29 21:49:16.384445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:57.588 [2024-09-29 21:49:16.384452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:57.588 [2024-09-29 21:49:16.384460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:57.588 [2024-09-29 21:49:16.384467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:57.588 [2024-09-29 21:49:16.384477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:57.588 [2024-09-29 21:49:16.384484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:57.588 [2024-09-29 21:49:16.384492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:57.588 [2024-09-29 21:49:16.384499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:57.588 [2024-09-29 21:49:16.384507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:57.588 [2024-09-29 21:49:16.384514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:57.588 [2024-09-29 21:49:16.384522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:57.588 [2024-09-29 21:49:16.384529] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:57.588 [2024-09-29 21:49:16.384538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:57.588 [2024-09-29 21:49:16.384546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:57.588 [2024-09-29 21:49:16.384554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:57.588 [2024-09-29 21:49:16.384561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:57.589 [2024-09-29 21:49:16.384571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:57.589 [2024-09-29 21:49:16.384578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.384588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:57.589 [2024-09-29 21:49:16.384595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:17:57.589 [2024-09-29 21:49:16.384603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.419915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.419959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.589 [2024-09-29 21:49:16.419973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.262 ms 00:17:57.589 [2024-09-29 21:49:16.419982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.420129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.420143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:57.589 [2024-09-29 21:49:16.420152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:57.589 [2024-09-29 21:49:16.420160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.452906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.452936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.589 [2024-09-29 21:49:16.452946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.724 ms 00:17:57.589 [2024-09-29 21:49:16.452954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.453036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.453047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.589 [2024-09-29 21:49:16.453056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:57.589 [2024-09-29 21:49:16.453064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.453505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.453522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.589 [2024-09-29 21:49:16.453531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:17:57.589 [2024-09-29 21:49:16.453539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.453682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.453692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.589 [2024-09-29 21:49:16.453700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:57.589 [2024-09-29 21:49:16.453707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.467601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.467627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.589 [2024-09-29 21:49:16.467637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.872 ms 00:17:57.589 [2024-09-29 21:49:16.467645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.480891] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:57.589 [2024-09-29 21:49:16.480922] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:57.589 [2024-09-29 21:49:16.480933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.480941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:57.589 [2024-09-29 21:49:16.480950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.188 ms 00:17:57.589 [2024-09-29 21:49:16.480958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.505683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.505711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:57.589 [2024-09-29 21:49:16.505727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.655 ms 00:17:57.589 [2024-09-29 21:49:16.505735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.517219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.517246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:57.589 [2024-09-29 21:49:16.517255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.418 ms 00:17:57.589 [2024-09-29 21:49:16.517263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.528476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.528502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:57.589 [2024-09-29 21:49:16.528512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.152 ms 00:17:57.589 [2024-09-29 21:49:16.528519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.589 [2024-09-29 21:49:16.529132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.589 [2024-09-29 21:49:16.529154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:57.589 [2024-09-29 21:49:16.529164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:17:57.589 [2024-09-29 21:49:16.529172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.851 [2024-09-29 21:49:16.588524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.851 [2024-09-29 
21:49:16.588559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:57.851 [2024-09-29 21:49:16.588571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.330 ms 00:17:57.851 [2024-09-29 21:49:16.588579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.851 [2024-09-29 21:49:16.599427] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:57.851 [2024-09-29 21:49:16.615731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.851 [2024-09-29 21:49:16.615766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:57.851 [2024-09-29 21:49:16.615779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.061 ms 00:17:57.851 [2024-09-29 21:49:16.615787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.851 [2024-09-29 21:49:16.615875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.851 [2024-09-29 21:49:16.615887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:57.851 [2024-09-29 21:49:16.615896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:57.851 [2024-09-29 21:49:16.615905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.851 [2024-09-29 21:49:16.615960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.851 [2024-09-29 21:49:16.615971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:57.851 [2024-09-29 21:49:16.615980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:57.851 [2024-09-29 21:49:16.615988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.851 [2024-09-29 21:49:16.616010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.851 [2024-09-29 21:49:16.616019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:57.851 [2024-09-29 21:49:16.616026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:57.851 [2024-09-29 21:49:16.616034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.851 [2024-09-29 21:49:16.616069] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:57.851 [2024-09-29 21:49:16.616080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.851 [2024-09-29 21:49:16.616090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:57.851 [2024-09-29 21:49:16.616098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:57.851 [2024-09-29 21:49:16.616106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.851 [2024-09-29 21:49:16.640127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.851 [2024-09-29 21:49:16.640155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:57.851 [2024-09-29 21:49:16.640166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.001 ms 00:17:57.851 [2024-09-29 21:49:16.640175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.851 [2024-09-29 21:49:16.640268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.851 [2024-09-29 21:49:16.640281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:57.851 [2024-09-29 
21:49:16.640291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:57.851 [2024-09-29 21:49:16.640300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.851 [2024-09-29 21:49:16.641493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:57.851 [2024-09-29 21:49:16.644571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.144 ms, result 0 00:17:57.851 [2024-09-29 21:49:16.645250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.851 [2024-09-29 21:49:16.658368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:04.155  Copying: 44/256 [MB] (44 MBps) Copying: 85/256 [MB] (41 MBps) Copying: 129/256 [MB] (43 MBps) Copying: 174/256 [MB] (45 MBps) Copying: 216/256 [MB] (41 MBps) Copying: 256/256 [MB] (average 43 MBps)[2024-09-29 21:49:23.066526] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:04.155 [2024-09-29 21:49:23.078862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.155 [2024-09-29 21:49:23.078903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:04.155 [2024-09-29 21:49:23.078919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:04.155 [2024-09-29 21:49:23.078928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.155 [2024-09-29 21:49:23.078952] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:04.155 [2024-09-29 21:49:23.081762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.155 [2024-09-29 21:49:23.081791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:04.155 [2024-09-29 21:49:23.081802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:18:04.155 [2024-09-29 21:49:23.081811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.155 [2024-09-29 21:49:23.082088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.155 [2024-09-29 21:49:23.082113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:04.155 [2024-09-29 21:49:23.082122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:04.155 [2024-09-29 21:49:23.082130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.155 [2024-09-29 21:49:23.085835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.155 [2024-09-29 21:49:23.085856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:04.155 [2024-09-29 21:49:23.085865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.673 ms 00:18:04.155 [2024-09-29 21:49:23.085873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.155 [2024-09-29 21:49:23.094441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.155 [2024-09-29 21:49:23.094476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:04.155 [2024-09-29 21:49:23.094490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.549 ms 00:18:04.155 [2024-09-29 21:49:23.094499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:04.155 [2024-09-29 21:49:23.119346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.155 [2024-09-29 21:49:23.119382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:04.155 [2024-09-29 21:49:23.119401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.783 ms 00:18:04.155 [2024-09-29 21:49:23.119409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.155 [2024-09-29 21:49:23.133350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.155 [2024-09-29 21:49:23.133382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:04.155 [2024-09-29 21:49:23.133407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.902 ms 00:18:04.155 [2024-09-29 21:49:23.133415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.155 [2024-09-29 21:49:23.133559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.155 [2024-09-29 21:49:23.133570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:04.155 [2024-09-29 21:49:23.133580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:04.155 [2024-09-29 21:49:23.133588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.415 [2024-09-29 21:49:23.156918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.415 [2024-09-29 21:49:23.156948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:04.415 [2024-09-29 21:49:23.156958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.308 ms 00:18:04.415 [2024-09-29 21:49:23.156966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.415 [2024-09-29 21:49:23.179890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.415 [2024-09-29 21:49:23.179920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:04.415 [2024-09-29 21:49:23.179931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.901 ms 00:18:04.415 [2024-09-29 21:49:23.179938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.415 [2024-09-29 21:49:23.202158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.415 [2024-09-29 21:49:23.202190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:04.415 [2024-09-29 21:49:23.202200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.200 ms 00:18:04.415 [2024-09-29 21:49:23.202208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.415 [2024-09-29 21:49:23.224902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.415 [2024-09-29 21:49:23.224932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:04.415 [2024-09-29 21:49:23.224942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.644 ms 00:18:04.415 [2024-09-29 21:49:23.224950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.415 [2024-09-29 21:49:23.224971] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:04.415 [2024-09-29 21:49:23.224986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:04.415 [2024-09-29 21:49:23.224996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
2: 0 / 261120 wr_cnt: 0 state: free 00:18:04.416 [... ftl_dev_dump_bands entries for Bands 3-100 condensed: each reports 0 / 261120 wr_cnt: 0 state: free, identical to Bands 1-2 ...] 00:18:04.417 [2024-09-29 21:49:23.225800] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:04.417 [2024-09-29 21:49:23.225814] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1605127c-fec2-4388-9e4a-c6c135288dc4 00:18:04.417 [2024-09-29 21:49:23.225824] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:04.417 [2024-09-29 21:49:23.225831] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:04.417 [2024-09-29 21:49:23.225839] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:04.417 [2024-09-29 21:49:23.225849] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:04.417 [2024-09-29 21:49:23.225856] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:04.417 [2024-09-29 21:49:23.225864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:04.417 [2024-09-29 21:49:23.225871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:04.417 [2024-09-29 21:49:23.225879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:04.417 [2024-09-29 21:49:23.225886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:04.417 [2024-09-29 21:49:23.225894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.417 [2024-09-29 21:49:23.225901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:04.417 [2024-09-29 21:49:23.225910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:18:04.417 [2024-09-29 21:49:23.225917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.417 [2024-09-29 21:49:23.238875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.417 [2024-09-29 21:49:23.238908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:04.417 [2024-09-29 21:49:23.238918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.930 ms 00:18:04.417 [2024-09-29 21:49:23.238925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.417 [2024-09-29 21:49:23.239285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.417 [2024-09-29 21:49:23.239303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:04.417 [2024-09-29 21:49:23.239312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:18:04.417 [2024-09-29 21:49:23.239320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.417 [2024-09-29 21:49:23.271326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.417 [2024-09-29 21:49:23.271361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:04.417 [2024-09-29 21:49:23.271370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.417 [2024-09-29 21:49:23.271379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.417 [2024-09-29 21:49:23.271487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.417 [2024-09-29 21:49:23.271498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:04.417 [2024-09-29 21:49:23.271506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.417 [2024-09-29 21:49:23.271514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.417 [2024-09-29 21:49:23.271556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.417 [2024-09-29 21:49:23.271569] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:04.417 [2024-09-29 21:49:23.271577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.417 [2024-09-29 21:49:23.271584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.417 [2024-09-29 21:49:23.271603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.417 [2024-09-29 21:49:23.271611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:04.417 [2024-09-29 21:49:23.271619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.417 [2024-09-29 21:49:23.271626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.417 [2024-09-29 21:49:23.353011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.417 [2024-09-29 21:49:23.353065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:04.417 [2024-09-29 21:49:23.353078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.417 [2024-09-29 21:49:23.353086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.676 [2024-09-29 21:49:23.419646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.676 [2024-09-29 21:49:23.419697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:04.676 [2024-09-29 21:49:23.419710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.676 [2024-09-29 21:49:23.419719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.676 [2024-09-29 21:49:23.419800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.676 [2024-09-29 21:49:23.419810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:04.676 [2024-09-29 21:49:23.419824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.676 [2024-09-29 21:49:23.419831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.676 [2024-09-29 21:49:23.419862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.676 [2024-09-29 21:49:23.419871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:04.676 [2024-09-29 21:49:23.419878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.676 [2024-09-29 21:49:23.419886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.676 [2024-09-29 21:49:23.419977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.676 [2024-09-29 21:49:23.419988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:04.676 [2024-09-29 21:49:23.419997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.676 [2024-09-29 21:49:23.420007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.676 [2024-09-29 21:49:23.420038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.676 [2024-09-29 21:49:23.420055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:04.676 [2024-09-29 21:49:23.420063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.676 [2024-09-29 21:49:23.420072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.676 [2024-09-29 21:49:23.420111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:04.676 [2024-09-29 21:49:23.420120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:04.676 [2024-09-29 21:49:23.420128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.676 [2024-09-29 21:49:23.420139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.676 [2024-09-29 21:49:23.420184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.676 [2024-09-29 21:49:23.420200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:04.676 [2024-09-29 21:49:23.420208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.676 [2024-09-29 21:49:23.420216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.676 [2024-09-29 21:49:23.420365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.496 ms, result 0 00:18:05.612 00:18:05.612 00:18:05.612 21:49:24 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:05.871 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:05.871 21:49:24 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:05.871 21:49:24 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:05.871 21:49:24 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:05.871 21:49:24 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:05.871 21:49:24 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:05.871 21:49:24 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:06.130 21:49:24 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74491 00:18:06.130 21:49:24 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74491 ']' 00:18:06.130 21:49:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74491 00:18:06.130 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74491) - No such process 00:18:06.130 Process with pid 74491 is not found 00:18:06.130 21:49:24 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74491 is not found' 00:18:06.130 00:18:06.130 real 0m51.274s 00:18:06.130 user 1m16.003s 00:18:06.130 sys 0m5.516s 00:18:06.130 21:49:24 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:06.130 21:49:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:06.130 ************************************ 00:18:06.130 END TEST ftl_trim 00:18:06.130 ************************************ 00:18:06.130 21:49:24 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:06.130 21:49:24 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:06.130 21:49:24 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:06.130 21:49:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:06.130 ************************************ 00:18:06.130 START TEST ftl_restore 00:18:06.130 ************************************ 00:18:06.130 21:49:24 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:06.130 * Looking for test storage... 
00:18:06.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:06.130 21:49:24 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:06.130 21:49:24 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:18:06.130 21:49:24 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:06.130 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.130 21:49:25 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:06.130 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.130 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:06.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.130 --rc genhtml_branch_coverage=1 00:18:06.130 --rc genhtml_function_coverage=1 00:18:06.130 --rc genhtml_legend=1 00:18:06.130 --rc geninfo_all_blocks=1 00:18:06.130 --rc geninfo_unexecuted_blocks=1 00:18:06.130 00:18:06.130 ' 00:18:06.130 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:06.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.130 --rc genhtml_branch_coverage=1 00:18:06.130 --rc genhtml_function_coverage=1 
00:18:06.130 --rc genhtml_legend=1 00:18:06.130 --rc geninfo_all_blocks=1 00:18:06.130 --rc geninfo_unexecuted_blocks=1 00:18:06.130 00:18:06.130 ' 00:18:06.130 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:06.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.130 --rc genhtml_branch_coverage=1 00:18:06.130 --rc genhtml_function_coverage=1 00:18:06.130 --rc genhtml_legend=1 00:18:06.130 --rc geninfo_all_blocks=1 00:18:06.130 --rc geninfo_unexecuted_blocks=1 00:18:06.130 00:18:06.130 ' 00:18:06.130 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:06.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.130 --rc genhtml_branch_coverage=1 00:18:06.130 --rc genhtml_function_coverage=1 00:18:06.130 --rc genhtml_legend=1 00:18:06.130 --rc geninfo_all_blocks=1 00:18:06.130 --rc geninfo_unexecuted_blocks=1 00:18:06.130 00:18:06.130 ' 00:18:06.130 21:49:25 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:06.130 21:49:25 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:06.130 21:49:25 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:06.130 21:49:25 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:06.130 21:49:25 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:06.130 21:49:25 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:06.130 21:49:25 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:06.130 21:49:25 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:06.130 21:49:25 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:06.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.0I8IokdTGu 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74712 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74712 00:18:06.131 21:49:25 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:06.131 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74712 ']' 00:18:06.131 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.131 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:06.131 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.131 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:06.131 21:49:25 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:06.389 [2024-09-29 21:49:25.135754] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:06.389 [2024-09-29 21:49:25.135879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74712 ] 00:18:06.389 [2024-09-29 21:49:25.287249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.648 [2024-09-29 21:49:25.499479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.214 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:07.214 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:07.214 21:49:26 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:07.214 21:49:26 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:07.214 21:49:26 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:07.214 21:49:26 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:07.214 21:49:26 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:07.214 21:49:26 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:07.473 21:49:26 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:07.473 21:49:26 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:07.473 21:49:26 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:07.473 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:07.473 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:07.473 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:07.473 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:07.473 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:07.731 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:07.731 { 00:18:07.731 "name": "nvme0n1", 00:18:07.731 "aliases": [ 00:18:07.731 "0ecc44e5-4aa8-4bfc-a6f3-47f4c288b61c" 00:18:07.731 ], 00:18:07.731 "product_name": "NVMe disk", 00:18:07.731 "block_size": 4096, 00:18:07.731 "num_blocks": 1310720, 00:18:07.731 "uuid": "0ecc44e5-4aa8-4bfc-a6f3-47f4c288b61c", 00:18:07.731 "numa_id": -1, 00:18:07.731 "assigned_rate_limits": { 00:18:07.731 "rw_ios_per_sec": 0, 00:18:07.731 "rw_mbytes_per_sec": 0, 00:18:07.731 "r_mbytes_per_sec": 0, 00:18:07.731 "w_mbytes_per_sec": 0 00:18:07.731 }, 00:18:07.731 "claimed": true, 00:18:07.731 "claim_type": "read_many_write_one", 00:18:07.731 "zoned": false, 00:18:07.731 "supported_io_types": { 00:18:07.731 "read": true, 00:18:07.731 "write": true, 00:18:07.731 "unmap": true, 00:18:07.731 "flush": true, 00:18:07.731 "reset": true, 00:18:07.731 "nvme_admin": true, 00:18:07.731 "nvme_io": true, 00:18:07.731 "nvme_io_md": false, 00:18:07.731 "write_zeroes": true, 00:18:07.731 "zcopy": false, 00:18:07.731 "get_zone_info": false, 00:18:07.731 "zone_management": false, 00:18:07.731 "zone_append": false, 00:18:07.731 "compare": true, 00:18:07.731 "compare_and_write": false, 00:18:07.731 "abort": true, 00:18:07.731 "seek_hole": false, 00:18:07.731 "seek_data": false, 00:18:07.731 "copy": true, 00:18:07.731 "nvme_iov_md": false 00:18:07.731 }, 00:18:07.731 "driver_specific": { 00:18:07.731 "nvme": [ 
00:18:07.731 { 00:18:07.731 "pci_address": "0000:00:11.0", 00:18:07.731 "trid": { 00:18:07.731 "trtype": "PCIe", 00:18:07.731 "traddr": "0000:00:11.0" 00:18:07.731 }, 00:18:07.731 "ctrlr_data": { 00:18:07.731 "cntlid": 0, 00:18:07.731 "vendor_id": "0x1b36", 00:18:07.731 "model_number": "QEMU NVMe Ctrl", 00:18:07.731 "serial_number": "12341", 00:18:07.731 "firmware_revision": "8.0.0", 00:18:07.731 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:07.731 "oacs": { 00:18:07.731 "security": 0, 00:18:07.731 "format": 1, 00:18:07.731 "firmware": 0, 00:18:07.731 "ns_manage": 1 00:18:07.731 }, 00:18:07.731 "multi_ctrlr": false, 00:18:07.731 "ana_reporting": false 00:18:07.731 }, 00:18:07.731 "vs": { 00:18:07.731 "nvme_version": "1.4" 00:18:07.731 }, 00:18:07.731 "ns_data": { 00:18:07.731 "id": 1, 00:18:07.731 "can_share": false 00:18:07.731 } 00:18:07.731 } 00:18:07.731 ], 00:18:07.731 "mp_policy": "active_passive" 00:18:07.731 } 00:18:07.731 } 00:18:07.731 ]' 00:18:07.731 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:07.731 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:07.731 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:07.731 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:07.731 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:07.731 21:49:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:07.731 21:49:26 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:07.732 21:49:26 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:07.732 21:49:26 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:07.732 21:49:26 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:07.732 21:49:26 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:07.990 21:49:26 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=07046a8a-96ee-477f-b0ab-146d3fa3cdde 00:18:07.990 21:49:26 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:07.990 21:49:26 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 07046a8a-96ee-477f-b0ab-146d3fa3cdde 00:18:08.248 21:49:27 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:08.506 21:49:27 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=89b6503d-9088-4113-8709-afeab6cd0336 00:18:08.506 21:49:27 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 89b6503d-9088-4113-8709-afeab6cd0336 00:18:08.506 21:49:27 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=06adf798-be87-4032-94fa-006b039f7a24 00:18:08.506 21:49:27 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:08.506 21:49:27 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 06adf798-be87-4032-94fa-006b039f7a24 00:18:08.506 21:49:27 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:08.506 21:49:27 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:08.506 21:49:27 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=06adf798-be87-4032-94fa-006b039f7a24 00:18:08.506 21:49:27 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:08.506 21:49:27 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
06adf798-be87-4032-94fa-006b039f7a24 00:18:08.506 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=06adf798-be87-4032-94fa-006b039f7a24 00:18:08.506 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:08.506 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:08.506 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:08.506 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06adf798-be87-4032-94fa-006b039f7a24 00:18:08.765 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:08.765 { 00:18:08.765 "name": "06adf798-be87-4032-94fa-006b039f7a24", 00:18:08.765 "aliases": [ 00:18:08.765 "lvs/nvme0n1p0" 00:18:08.765 ], 00:18:08.765 "product_name": "Logical Volume", 00:18:08.765 "block_size": 4096, 00:18:08.765 "num_blocks": 26476544, 00:18:08.765 "uuid": "06adf798-be87-4032-94fa-006b039f7a24", 00:18:08.765 "assigned_rate_limits": { 00:18:08.765 "rw_ios_per_sec": 0, 00:18:08.765 "rw_mbytes_per_sec": 0, 00:18:08.765 "r_mbytes_per_sec": 0, 00:18:08.765 "w_mbytes_per_sec": 0 00:18:08.765 }, 00:18:08.765 "claimed": false, 00:18:08.765 "zoned": false, 00:18:08.765 "supported_io_types": { 00:18:08.765 "read": true, 00:18:08.765 "write": true, 00:18:08.765 "unmap": true, 00:18:08.765 "flush": false, 00:18:08.765 "reset": true, 00:18:08.765 "nvme_admin": false, 00:18:08.765 "nvme_io": false, 00:18:08.765 "nvme_io_md": false, 00:18:08.765 "write_zeroes": true, 00:18:08.765 "zcopy": false, 00:18:08.765 "get_zone_info": false, 00:18:08.765 "zone_management": false, 00:18:08.765 "zone_append": false, 00:18:08.765 "compare": false, 00:18:08.765 "compare_and_write": false, 00:18:08.765 "abort": false, 00:18:08.765 "seek_hole": true, 00:18:08.765 "seek_data": true, 00:18:08.765 "copy": false, 00:18:08.765 "nvme_iov_md": false 00:18:08.765 }, 00:18:08.765 "driver_specific": { 00:18:08.765 "lvol": { 00:18:08.765 "lvol_store_uuid": "89b6503d-9088-4113-8709-afeab6cd0336", 00:18:08.765 "base_bdev": "nvme0n1", 00:18:08.765 "thin_provision": true, 00:18:08.765 "num_allocated_clusters": 0, 00:18:08.765 "snapshot": false, 00:18:08.765 "clone": false, 00:18:08.765 "esnap_clone": false 00:18:08.765 } 00:18:08.765 } 00:18:08.765 } 00:18:08.765 ]' 00:18:08.765 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:08.765 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:08.765 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:09.023 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:09.023 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:09.023 21:49:27 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:09.023 21:49:27 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:09.023 21:49:27 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:09.023 21:49:27 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:09.284 21:49:28 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:09.284 21:49:28 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:09.284 21:49:28 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 06adf798-be87-4032-94fa-006b039f7a24 00:18:09.284 21:49:28 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=06adf798-be87-4032-94fa-006b039f7a24 00:18:09.284 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:09.284 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:09.284 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:09.284 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06adf798-be87-4032-94fa-006b039f7a24 00:18:09.284 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:09.284 { 00:18:09.284 "name": "06adf798-be87-4032-94fa-006b039f7a24", 00:18:09.284 "aliases": [ 00:18:09.284 "lvs/nvme0n1p0" 00:18:09.284 ], 00:18:09.284 "product_name": "Logical Volume", 00:18:09.284 "block_size": 4096, 00:18:09.284 "num_blocks": 26476544, 00:18:09.284 "uuid": "06adf798-be87-4032-94fa-006b039f7a24", 00:18:09.284 "assigned_rate_limits": { 00:18:09.284 "rw_ios_per_sec": 0, 00:18:09.284 "rw_mbytes_per_sec": 0, 00:18:09.284 "r_mbytes_per_sec": 0, 00:18:09.284 "w_mbytes_per_sec": 0 00:18:09.284 }, 00:18:09.284 "claimed": false, 00:18:09.284 "zoned": false, 00:18:09.284 "supported_io_types": { 00:18:09.284 "read": true, 00:18:09.284 "write": true, 00:18:09.284 "unmap": true, 00:18:09.284 "flush": false, 00:18:09.284 "reset": true, 00:18:09.284 "nvme_admin": false, 00:18:09.284 "nvme_io": false, 00:18:09.284 "nvme_io_md": false, 00:18:09.284 "write_zeroes": true, 00:18:09.284 "zcopy": false, 00:18:09.284 "get_zone_info": false, 00:18:09.284 "zone_management": false, 00:18:09.284 "zone_append": false, 00:18:09.284 "compare": false, 00:18:09.284 "compare_and_write": false, 00:18:09.284 "abort": false, 00:18:09.284 "seek_hole": true, 00:18:09.284 "seek_data": true, 00:18:09.284 "copy": false, 00:18:09.284 "nvme_iov_md": false 00:18:09.284 }, 00:18:09.284 "driver_specific": { 00:18:09.284 "lvol": { 00:18:09.284 "lvol_store_uuid": "89b6503d-9088-4113-8709-afeab6cd0336", 00:18:09.284 "base_bdev": "nvme0n1", 00:18:09.284 "thin_provision": true, 00:18:09.284 "num_allocated_clusters": 0, 00:18:09.284 "snapshot": false, 00:18:09.284 "clone": false, 00:18:09.284 "esnap_clone": false 00:18:09.284 } 00:18:09.284 } 00:18:09.284 } 00:18:09.284 ]' 00:18:09.284 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:09.284 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:09.284 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:09.549 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:09.549 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:09.549 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:09.549 21:49:28 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:09.549 21:49:28 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:09.549 21:49:28 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:09.549 21:49:28 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 06adf798-be87-4032-94fa-006b039f7a24 00:18:09.549 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=06adf798-be87-4032-94fa-006b039f7a24 00:18:09.549 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:09.549 21:49:28 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:18:09.549 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:09.549 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06adf798-be87-4032-94fa-006b039f7a24 00:18:09.808 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:09.808 { 00:18:09.808 "name": "06adf798-be87-4032-94fa-006b039f7a24", 00:18:09.808 "aliases": [ 00:18:09.808 "lvs/nvme0n1p0" 00:18:09.808 ], 00:18:09.808 "product_name": "Logical Volume", 00:18:09.808 "block_size": 4096, 00:18:09.808 "num_blocks": 26476544, 00:18:09.808 "uuid": "06adf798-be87-4032-94fa-006b039f7a24", 00:18:09.808 "assigned_rate_limits": { 00:18:09.808 "rw_ios_per_sec": 0, 00:18:09.808 "rw_mbytes_per_sec": 0, 00:18:09.808 "r_mbytes_per_sec": 0, 00:18:09.808 "w_mbytes_per_sec": 0 00:18:09.808 }, 00:18:09.808 "claimed": false, 00:18:09.808 "zoned": false, 00:18:09.808 "supported_io_types": { 00:18:09.808 "read": true, 00:18:09.808 "write": true, 00:18:09.808 "unmap": true, 00:18:09.808 "flush": false, 00:18:09.808 "reset": true, 00:18:09.808 "nvme_admin": false, 00:18:09.808 "nvme_io": false, 00:18:09.808 "nvme_io_md": false, 00:18:09.808 "write_zeroes": true, 00:18:09.808 "zcopy": false, 00:18:09.808 "get_zone_info": false, 00:18:09.808 "zone_management": false, 00:18:09.808 "zone_append": false, 00:18:09.808 "compare": false, 00:18:09.808 "compare_and_write": false, 00:18:09.808 "abort": false, 00:18:09.808 "seek_hole": true, 00:18:09.808 "seek_data": true, 00:18:09.808 "copy": false, 00:18:09.808 "nvme_iov_md": false 00:18:09.808 }, 00:18:09.808 "driver_specific": { 00:18:09.808 "lvol": { 00:18:09.808 "lvol_store_uuid": "89b6503d-9088-4113-8709-afeab6cd0336", 00:18:09.808 "base_bdev": "nvme0n1", 00:18:09.808 "thin_provision": true, 00:18:09.808 "num_allocated_clusters": 0, 00:18:09.808 "snapshot": false, 00:18:09.808 "clone": false, 00:18:09.808 "esnap_clone": false 00:18:09.808 } 00:18:09.808 } 00:18:09.808 } 00:18:09.808 ]' 00:18:09.808 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:09.808 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:09.808 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:09.808 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:09.808 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:09.808 21:49:28 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:09.808 21:49:28 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:09.808 21:49:28 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 06adf798-be87-4032-94fa-006b039f7a24 --l2p_dram_limit 10' 00:18:09.808 21:49:28 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:09.808 21:49:28 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:09.808 21:49:28 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:09.808 21:49:28 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:09.808 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:09.808 21:49:28 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 06adf798-be87-4032-94fa-006b039f7a24 --l2p_dram_limit 10 -c nvc0n1p0 00:18:10.068 
[2024-09-29 21:49:28.957327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.957396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:10.068 [2024-09-29 21:49:28.957412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.068 [2024-09-29 21:49:28.957420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.957471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.957479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.068 [2024-09-29 21:49:28.957488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:10.068 [2024-09-29 21:49:28.957494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.957517] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:10.068 [2024-09-29 21:49:28.958157] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:10.068 [2024-09-29 21:49:28.958181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.958189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.068 [2024-09-29 21:49:28.958197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:18:10.068 [2024-09-29 21:49:28.958206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.958332] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bc72902e-1a7b-4b64-8768-6432d06942bd 00:18:10.068 [2024-09-29 21:49:28.959642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.959669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:10.068 [2024-09-29 21:49:28.959677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:10.068 [2024-09-29 21:49:28.959686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.966566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.966593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.068 [2024-09-29 21:49:28.966601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.841 ms 00:18:10.068 [2024-09-29 21:49:28.966609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.966684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.966694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.068 [2024-09-29 21:49:28.966701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:10.068 [2024-09-29 21:49:28.966714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.966756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.966765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:10.068 [2024-09-29 21:49:28.966772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:10.068 [2024-09-29 21:49:28.966778] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.966797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:10.068 [2024-09-29 21:49:28.970051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.970075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.068 [2024-09-29 21:49:28.970086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:18:10.068 [2024-09-29 21:49:28.970092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.970122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.970129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:10.068 [2024-09-29 21:49:28.970137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:10.068 [2024-09-29 21:49:28.970153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.970168] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:10.068 [2024-09-29 21:49:28.970278] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:10.068 [2024-09-29 21:49:28.970291] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:10.068 [2024-09-29 21:49:28.970299] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:10.068 [2024-09-29 21:49:28.970311] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:10.068 [2024-09-29 21:49:28.970318] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:10.068 [2024-09-29 21:49:28.970328] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:10.068 [2024-09-29 21:49:28.970335] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:10.068 [2024-09-29 21:49:28.970342] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:10.068 [2024-09-29 21:49:28.970348] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:10.068 [2024-09-29 21:49:28.970356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.970367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:10.068 [2024-09-29 21:49:28.970375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:18:10.068 [2024-09-29 21:49:28.970381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.970459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.068 [2024-09-29 21:49:28.970467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:10.068 [2024-09-29 21:49:28.970475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:10.068 [2024-09-29 21:49:28.970481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.068 [2024-09-29 21:49:28.970557] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:10.068 [2024-09-29 21:49:28.970570] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:18:10.068 [2024-09-29 21:49:28.970579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.068 [2024-09-29 21:49:28.970585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.068 [2024-09-29 21:49:28.970593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:10.068 [2024-09-29 21:49:28.970598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:10.068 [2024-09-29 21:49:28.970605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:10.068 [2024-09-29 21:49:28.970610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:10.068 [2024-09-29 21:49:28.970620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:10.068 [2024-09-29 21:49:28.970625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.068 [2024-09-29 21:49:28.970632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:10.068 [2024-09-29 21:49:28.970637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:10.068 [2024-09-29 21:49:28.970644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.068 [2024-09-29 21:49:28.970650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:10.069 [2024-09-29 21:49:28.970657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:10.069 [2024-09-29 21:49:28.970662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.069 [2024-09-29 21:49:28.970671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:10.069 [2024-09-29 21:49:28.970676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:10.069 [2024-09-29 21:49:28.970683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.069 [2024-09-29 21:49:28.970688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:10.069 [2024-09-29 21:49:28.970696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:10.069 [2024-09-29 21:49:28.970701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.069 [2024-09-29 21:49:28.970707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:10.069 [2024-09-29 21:49:28.970712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:10.069 [2024-09-29 21:49:28.970718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.069 [2024-09-29 21:49:28.970723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:10.069 [2024-09-29 21:49:28.970730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:10.069 [2024-09-29 21:49:28.970735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.069 [2024-09-29 21:49:28.970742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:10.069 [2024-09-29 21:49:28.970747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:10.069 [2024-09-29 21:49:28.970754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.069 [2024-09-29 21:49:28.970759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:10.069 [2024-09-29 21:49:28.970767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:10.069 [2024-09-29 21:49:28.970772] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.069 [2024-09-29 21:49:28.970778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:10.069 [2024-09-29 21:49:28.970784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:10.069 [2024-09-29 21:49:28.970790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.069 [2024-09-29 21:49:28.970795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:10.069 [2024-09-29 21:49:28.970802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:10.069 [2024-09-29 21:49:28.970807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.069 [2024-09-29 21:49:28.970814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:10.069 [2024-09-29 21:49:28.970820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:10.069 [2024-09-29 21:49:28.970827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.069 [2024-09-29 21:49:28.970832] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:10.069 [2024-09-29 21:49:28.970839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:10.069 [2024-09-29 21:49:28.970846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.069 [2024-09-29 21:49:28.970854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.069 [2024-09-29 21:49:28.970861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:10.069 [2024-09-29 21:49:28.970870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:10.069 [2024-09-29 21:49:28.970875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:10.069 [2024-09-29 21:49:28.970881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:10.069 [2024-09-29 21:49:28.970887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:10.069 [2024-09-29 21:49:28.970893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:10.069 [2024-09-29 21:49:28.970901] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:10.069 [2024-09-29 21:49:28.970910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.069 [2024-09-29 21:49:28.970916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:10.069 [2024-09-29 21:49:28.970923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:10.069 [2024-09-29 21:49:28.970929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:10.069 [2024-09-29 21:49:28.970936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:10.069 [2024-09-29 21:49:28.970941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:10.069 [2024-09-29 21:49:28.970948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:18:10.069 [2024-09-29 21:49:28.970953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:10.069 [2024-09-29 21:49:28.970960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:10.069 [2024-09-29 21:49:28.970965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:10.069 [2024-09-29 21:49:28.970974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:10.069 [2024-09-29 21:49:28.970979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:10.069 [2024-09-29 21:49:28.970986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:10.069 [2024-09-29 21:49:28.970992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:10.069 [2024-09-29 21:49:28.970999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:10.069 [2024-09-29 21:49:28.971004] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:10.069 [2024-09-29 21:49:28.971013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.069 [2024-09-29 21:49:28.971019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:10.069 [2024-09-29 21:49:28.971027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:10.069 [2024-09-29 21:49:28.971033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:10.069 [2024-09-29 21:49:28.971040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:10.069 [2024-09-29 21:49:28.971046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.069 [2024-09-29 21:49:28.971054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:10.069 [2024-09-29 21:49:28.971060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:18:10.069 [2024-09-29 21:49:28.971067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.069 [2024-09-29 21:49:28.971112] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
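For context, the get_bdev_size helper expanded three times above (common/autotest_common.sh@1378-1388) derives a bdev's size in MiB from the bdev_get_bdevs JSON: 4096-byte blocks * 26476544 blocks / 1024 / 1024 = 103424 MiB. A minimal sketch reconstructed from the xtrace lines; the exact plumbing inside autotest_common.sh may differ:

    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb
        # one JSON descriptor per bdev, fetched over the local RPC socket
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544
        echo $((bs * nb / 1024 / 1024))                # 103424
    }

The returned 103424 is cut down to 5171 for both base_size (ftl/common.sh@41) and cache_size (ftl/common.sh@48), consistent with an integer 103424/20, i.e. roughly 5% of the volume; bdev_split_create nvc0n1 -s 5171 1 then carves that cache out as nvc0n1p0. The '[: : integer expression expected' complaint from restore.sh line 54 is the script probing an unset flag with [ '' -eq 1 ]; it is non-fatal, and the run proceeds to bdev_ftl_create with -c nvc0n1p0 as the NV cache. The scrub announced above wipes the 5 NV-cache chunks before first use (about 2.1 s, per the trace that follows).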
00:18:10.069 [2024-09-29 21:49:28.971124] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:12.599 [2024-09-29 21:49:31.115006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.115068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:12.599 [2024-09-29 21:49:31.115081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2143.882 ms 00:18:12.599 [2024-09-29 21:49:31.115090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.139174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.139222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.599 [2024-09-29 21:49:31.139234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.912 ms 00:18:12.599 [2024-09-29 21:49:31.139242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.139359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.139370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:12.599 [2024-09-29 21:49:31.139378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:12.599 [2024-09-29 21:49:31.139402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.174521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.174565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.599 [2024-09-29 21:49:31.174580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.087 ms 00:18:12.599 [2024-09-29 21:49:31.174589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.174628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.174637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.599 [2024-09-29 21:49:31.174644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:12.599 [2024-09-29 21:49:31.174658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.175085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.175104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.599 [2024-09-29 21:49:31.175113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:18:12.599 [2024-09-29 21:49:31.175123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.175212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.175222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.599 [2024-09-29 21:49:31.175229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:12.599 [2024-09-29 21:49:31.175239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.191106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.191133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.599 [2024-09-29 
21:49:31.191141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.832 ms 00:18:12.599 [2024-09-29 21:49:31.191149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.201204] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:12.599 [2024-09-29 21:49:31.204237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.204260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:12.599 [2024-09-29 21:49:31.204272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.023 ms 00:18:12.599 [2024-09-29 21:49:31.204279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.265618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.265658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:12.599 [2024-09-29 21:49:31.265677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.312 ms 00:18:12.599 [2024-09-29 21:49:31.265685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.265873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.265885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:12.599 [2024-09-29 21:49:31.265899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:12.599 [2024-09-29 21:49:31.265907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.289267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.289301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:12.599 [2024-09-29 21:49:31.289315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.313 ms 00:18:12.599 [2024-09-29 21:49:31.289323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.311725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.311752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:12.599 [2024-09-29 21:49:31.311765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.362 ms 00:18:12.599 [2024-09-29 21:49:31.311773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.312355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.312373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:12.599 [2024-09-29 21:49:31.312383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:18:12.599 [2024-09-29 21:49:31.312402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.380873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.380904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:12.599 [2024-09-29 21:49:31.380919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.436 ms 00:18:12.599 [2024-09-29 21:49:31.380930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 
21:49:31.406117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.406166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:12.599 [2024-09-29 21:49:31.406180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.117 ms 00:18:12.599 [2024-09-29 21:49:31.406189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.429684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.429716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:12.599 [2024-09-29 21:49:31.429728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.467 ms 00:18:12.599 [2024-09-29 21:49:31.429736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.452872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.452909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:12.599 [2024-09-29 21:49:31.452923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.109 ms 00:18:12.599 [2024-09-29 21:49:31.452930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.452957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.452966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:12.599 [2024-09-29 21:49:31.452981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:12.599 [2024-09-29 21:49:31.452994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.453076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.599 [2024-09-29 21:49:31.453087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:12.599 [2024-09-29 21:49:31.453097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:12.599 [2024-09-29 21:49:31.453104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.599 [2024-09-29 21:49:31.454410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2496.599 ms, result 0 00:18:12.599 { 00:18:12.599 "name": "ftl0", 00:18:12.599 "uuid": "bc72902e-1a7b-4b64-8768-6432d06942bd" 00:18:12.599 } 00:18:12.599 21:49:31 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:12.599 21:49:31 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:12.857 21:49:31 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:12.857 21:49:31 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:13.117 [2024-09-29 21:49:31.877586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.877639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:13.117 [2024-09-29 21:49:31.877652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:13.117 [2024-09-29 21:49:31.877663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.877687] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
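The echo / save_subsystem_config / echo triplet just traced (restore.sh@61-63) is the config-capture idiom this restore test depends on: the two echoes wrap the live bdev subsystem dump in the outer {"subsystems": [ ... ]} envelope, yielding JSON that a fresh SPDK process can replay. A sketch, assuming the output is redirected into the ftl.json that spdk_dd consumes later in this run (the redirection itself is not visible in the xtrace):

    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

With the topology captured, ftl0 can be torn down; the bdev_ftl_unload trace around this point persists L2P, NV-cache, band and trim metadata plus the superblock, then sets the clean-state flag, which is what lets a later startup restore the device instead of rebuilding it.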
00:18:13.117 [2024-09-29 21:49:31.880507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.880537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:13.117 [2024-09-29 21:49:31.880558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.800 ms 00:18:13.117 [2024-09-29 21:49:31.880567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.880829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.880840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:13.117 [2024-09-29 21:49:31.880851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:18:13.117 [2024-09-29 21:49:31.880859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.884299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.884476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:13.117 [2024-09-29 21:49:31.884495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.423 ms 00:18:13.117 [2024-09-29 21:49:31.884505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.890726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.890754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:13.117 [2024-09-29 21:49:31.890768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.197 ms 00:18:13.117 [2024-09-29 21:49:31.890776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.912802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.912828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:13.117 [2024-09-29 21:49:31.912838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.950 ms 00:18:13.117 [2024-09-29 21:49:31.912844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.925176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.925205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:13.117 [2024-09-29 21:49:31.925216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.296 ms 00:18:13.117 [2024-09-29 21:49:31.925223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.925333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.925343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:13.117 [2024-09-29 21:49:31.925352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:13.117 [2024-09-29 21:49:31.925357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.943238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.943266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:13.117 [2024-09-29 21:49:31.943276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.863 ms 00:18:13.117 [2024-09-29 21:49:31.943282] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.960772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.960799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:13.117 [2024-09-29 21:49:31.960809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.458 ms 00:18:13.117 [2024-09-29 21:49:31.960815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.977887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.977913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:13.117 [2024-09-29 21:49:31.977924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.040 ms 00:18:13.117 [2024-09-29 21:49:31.977929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.994605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.117 [2024-09-29 21:49:31.994631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:13.117 [2024-09-29 21:49:31.994640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.617 ms 00:18:13.117 [2024-09-29 21:49:31.994646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.117 [2024-09-29 21:49:31.994676] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:13.117 [2024-09-29 21:49:31.994689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 
21:49:31.994786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:18:13.118 [2024-09-29 21:49:31.994955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.994999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:13.118 [2024-09-29 21:49:31.995295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:13.119 [2024-09-29 21:49:31.995302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:13.119 [2024-09-29 21:49:31.995308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:13.119 [2024-09-29 21:49:31.995315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:13.119 [2024-09-29 21:49:31.995321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:13.119 [2024-09-29 21:49:31.995328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:13.119 [2024-09-29 21:49:31.995334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:13.119 [2024-09-29 21:49:31.995342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:13.119 [2024-09-29 21:49:31.995348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:13.119 [2024-09-29 21:49:31.995356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:13.119 [2024-09-29 21:49:31.995368] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:13.119 [2024-09-29 21:49:31.995378] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc72902e-1a7b-4b64-8768-6432d06942bd 00:18:13.119 [2024-09-29 21:49:31.995384] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:13.119 [2024-09-29 21:49:31.995406] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:13.119 [2024-09-29 21:49:31.995412] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:13.119 [2024-09-29 21:49:31.995420] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:13.119 [2024-09-29 21:49:31.995425] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:13.119 [2024-09-29 21:49:31.995433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:13.119 [2024-09-29 21:49:31.995441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:13.119 [2024-09-29 21:49:31.995449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:13.119 [2024-09-29 21:49:31.995454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:13.119 [2024-09-29 21:49:31.995461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.119 [2024-09-29 21:49:31.995467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:13.119 [2024-09-29 21:49:31.995476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:18:13.119 [2024-09-29 21:49:31.995482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.119 [2024-09-29 21:49:32.005256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.119 [2024-09-29 21:49:32.005280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
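Decoding the shutdown dump above: each band line reads 0 / 261120 wr_cnt: 0 state: free (read as valid blocks / band capacity), so none of the 100 bands holds valid data and none has been written, while the stats show 960 total writes against 0 user writes. WAF (write amplification factor, total writes / user writes) is therefore 960 / 0, printed as inf; every write so far has been FTL metadata, not user data. The remainder of the trace releases resources in reverse init order (deinitialize L2P and P2L, then the rollback steps down to closing the cache and base bdevs) until 'FTL shutdown' completes in about 275 ms.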
00:18:13.119 [2024-09-29 21:49:32.005290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.747 ms 00:18:13.119 [2024-09-29 21:49:32.005296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.119 [2024-09-29 21:49:32.005614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.119 [2024-09-29 21:49:32.005623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:13.119 [2024-09-29 21:49:32.005632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:13.119 [2024-09-29 21:49:32.005638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.119 [2024-09-29 21:49:32.036582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.119 [2024-09-29 21:49:32.036761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.119 [2024-09-29 21:49:32.036779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.119 [2024-09-29 21:49:32.036788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.119 [2024-09-29 21:49:32.036842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.119 [2024-09-29 21:49:32.036849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.119 [2024-09-29 21:49:32.036857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.119 [2024-09-29 21:49:32.036863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.119 [2024-09-29 21:49:32.036922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.119 [2024-09-29 21:49:32.036930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.119 [2024-09-29 21:49:32.036938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.119 [2024-09-29 21:49:32.036944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.119 [2024-09-29 21:49:32.036965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.119 [2024-09-29 21:49:32.036971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.119 [2024-09-29 21:49:32.036979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.119 [2024-09-29 21:49:32.036985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.378 [2024-09-29 21:49:32.100615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.378 [2024-09-29 21:49:32.100663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.378 [2024-09-29 21:49:32.100676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.378 [2024-09-29 21:49:32.100682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.378 [2024-09-29 21:49:32.152016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.378 [2024-09-29 21:49:32.152235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.378 [2024-09-29 21:49:32.152254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.378 [2024-09-29 21:49:32.152261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.378 [2024-09-29 21:49:32.152361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.378 [2024-09-29 21:49:32.152370] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.378 [2024-09-29 21:49:32.152378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.378 [2024-09-29 21:49:32.152385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.378 [2024-09-29 21:49:32.152442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.378 [2024-09-29 21:49:32.152453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.378 [2024-09-29 21:49:32.152461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.378 [2024-09-29 21:49:32.152468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.378 [2024-09-29 21:49:32.152556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.378 [2024-09-29 21:49:32.152565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.378 [2024-09-29 21:49:32.152574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.378 [2024-09-29 21:49:32.152581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.378 [2024-09-29 21:49:32.152620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.378 [2024-09-29 21:49:32.152628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:13.378 [2024-09-29 21:49:32.152639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.378 [2024-09-29 21:49:32.152645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.378 [2024-09-29 21:49:32.152682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.378 [2024-09-29 21:49:32.152690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.378 [2024-09-29 21:49:32.152698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.378 [2024-09-29 21:49:32.152704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.378 [2024-09-29 21:49:32.152749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.378 [2024-09-29 21:49:32.152759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.378 [2024-09-29 21:49:32.152768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.378 [2024-09-29 21:49:32.152774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.378 [2024-09-29 21:49:32.152890] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.276 ms, result 0 00:18:13.378 true 00:18:13.378 21:49:32 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74712 00:18:13.378 21:49:32 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74712 ']' 00:18:13.378 21:49:32 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74712 00:18:13.378 21:49:32 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:13.378 21:49:32 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.378 21:49:32 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74712 00:18:13.378 killing process with pid 74712 00:18:13.378 21:49:32 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:13.378 21:49:32 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:18:13.378 21:49:32 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74712' 00:18:13.378 21:49:32 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74712 00:18:13.378 21:49:32 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74712 00:18:19.942 21:49:38 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:24.144 262144+0 records in 00:18:24.144 262144+0 records out 00:18:24.144 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.02411 s, 267 MB/s 00:18:24.144 21:49:42 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:25.521 21:49:44 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:25.521 [2024-09-29 21:49:44.384201] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:25.521 [2024-09-29 21:49:44.384519] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74930 ] 00:18:25.780 [2024-09-29 21:49:44.529358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.780 [2024-09-29 21:49:44.744523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.039 [2024-09-29 21:49:45.017923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:26.039 [2024-09-29 21:49:45.018001] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:26.299 [2024-09-29 21:49:45.173234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.173304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:26.299 [2024-09-29 21:49:45.173318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:26.299 [2024-09-29 21:49:45.173331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.173383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.173409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:26.299 [2024-09-29 21:49:45.173418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:26.299 [2024-09-29 21:49:45.173441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.173464] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:26.299 [2024-09-29 21:49:45.174130] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:26.299 [2024-09-29 21:49:45.174161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.174169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:26.299 [2024-09-29 21:49:45.174177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:18:26.299 [2024-09-29 21:49:45.174186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.175651] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:26.299 [2024-09-29 21:49:45.188446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.188657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:26.299 [2024-09-29 21:49:45.188678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.796 ms 00:18:26.299 [2024-09-29 21:49:45.188687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.188767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.188778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:26.299 [2024-09-29 21:49:45.188787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:26.299 [2024-09-29 21:49:45.188794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.195606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.195778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:26.299 [2024-09-29 21:49:45.195794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.755 ms 00:18:26.299 [2024-09-29 21:49:45.195802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.195886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.195896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:26.299 [2024-09-29 21:49:45.195905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:26.299 [2024-09-29 21:49:45.195913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.195970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.195981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:26.299 [2024-09-29 21:49:45.195989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:26.299 [2024-09-29 21:49:45.195997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.196022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:26.299 [2024-09-29 21:49:45.199586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.199615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:26.299 [2024-09-29 21:49:45.199625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.571 ms 00:18:26.299 [2024-09-29 21:49:45.199633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.199665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.199674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:26.299 [2024-09-29 21:49:45.199683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:26.299 [2024-09-29 21:49:45.199690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.199716] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:26.299 [2024-09-29 21:49:45.199737] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:26.299 [2024-09-29 21:49:45.199774] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:26.299 [2024-09-29 21:49:45.199791] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:26.299 [2024-09-29 21:49:45.199896] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:26.299 [2024-09-29 21:49:45.199907] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:26.299 [2024-09-29 21:49:45.199919] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:26.299 [2024-09-29 21:49:45.199932] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:26.299 [2024-09-29 21:49:45.199941] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:26.299 [2024-09-29 21:49:45.199949] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:26.299 [2024-09-29 21:49:45.199958] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:26.299 [2024-09-29 21:49:45.199966] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:26.299 [2024-09-29 21:49:45.199974] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:26.299 [2024-09-29 21:49:45.199982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.199990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:26.299 [2024-09-29 21:49:45.199998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:18:26.299 [2024-09-29 21:49:45.200005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.200088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.299 [2024-09-29 21:49:45.200099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:26.299 [2024-09-29 21:49:45.200107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:26.299 [2024-09-29 21:49:45.200114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.299 [2024-09-29 21:49:45.200230] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:26.299 [2024-09-29 21:49:45.200241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:26.299 [2024-09-29 21:49:45.200250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:26.299 [2024-09-29 21:49:45.200259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.299 [2024-09-29 21:49:45.200267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:26.299 [2024-09-29 21:49:45.200274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:26.299 [2024-09-29 21:49:45.200281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:26.299 [2024-09-29 21:49:45.200289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:26.299 [2024-09-29 21:49:45.200296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:26.299 [2024-09-29 
21:49:45.200303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:26.299 [2024-09-29 21:49:45.200310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:26.299 [2024-09-29 21:49:45.200317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:26.299 [2024-09-29 21:49:45.200324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:26.299 [2024-09-29 21:49:45.200338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:26.299 [2024-09-29 21:49:45.200345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:26.299 [2024-09-29 21:49:45.200352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.300 [2024-09-29 21:49:45.200359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:26.300 [2024-09-29 21:49:45.200366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:26.300 [2024-09-29 21:49:45.200374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.300 [2024-09-29 21:49:45.200383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:26.300 [2024-09-29 21:49:45.200401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:26.300 [2024-09-29 21:49:45.200408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.300 [2024-09-29 21:49:45.200415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:26.300 [2024-09-29 21:49:45.200422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:26.300 [2024-09-29 21:49:45.200429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.300 [2024-09-29 21:49:45.200436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:26.300 [2024-09-29 21:49:45.200443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:26.300 [2024-09-29 21:49:45.200450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.300 [2024-09-29 21:49:45.200457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:26.300 [2024-09-29 21:49:45.200464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:26.300 [2024-09-29 21:49:45.200470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:26.300 [2024-09-29 21:49:45.200477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:26.300 [2024-09-29 21:49:45.200484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:26.300 [2024-09-29 21:49:45.200491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:26.300 [2024-09-29 21:49:45.200498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:26.300 [2024-09-29 21:49:45.200504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:26.300 [2024-09-29 21:49:45.200511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:26.300 [2024-09-29 21:49:45.200517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:26.300 [2024-09-29 21:49:45.200523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:26.300 [2024-09-29 21:49:45.200531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.300 [2024-09-29 21:49:45.200538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:18:26.300 [2024-09-29 21:49:45.200545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:26.300 [2024-09-29 21:49:45.200552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.300 [2024-09-29 21:49:45.200559] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:26.300 [2024-09-29 21:49:45.200567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:26.300 [2024-09-29 21:49:45.200576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:26.300 [2024-09-29 21:49:45.200583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:26.300 [2024-09-29 21:49:45.200592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:26.300 [2024-09-29 21:49:45.200599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:26.300 [2024-09-29 21:49:45.200605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:26.300 [2024-09-29 21:49:45.200612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:26.300 [2024-09-29 21:49:45.200621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:26.300 [2024-09-29 21:49:45.200628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:26.300 [2024-09-29 21:49:45.200636] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:26.300 [2024-09-29 21:49:45.200646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:26.300 [2024-09-29 21:49:45.200654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:26.300 [2024-09-29 21:49:45.200662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:26.300 [2024-09-29 21:49:45.200669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:26.300 [2024-09-29 21:49:45.200676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:26.300 [2024-09-29 21:49:45.200683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:26.300 [2024-09-29 21:49:45.200691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:26.300 [2024-09-29 21:49:45.200698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:26.300 [2024-09-29 21:49:45.200705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:26.300 [2024-09-29 21:49:45.200712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:26.300 [2024-09-29 21:49:45.200719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:26.300 [2024-09-29 21:49:45.200726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:26.300 [2024-09-29 21:49:45.200734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:26.300 [2024-09-29 21:49:45.200742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:26.300 [2024-09-29 21:49:45.200749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:26.300 [2024-09-29 21:49:45.200756] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:26.300 [2024-09-29 21:49:45.200763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:26.300 [2024-09-29 21:49:45.200772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:26.300 [2024-09-29 21:49:45.200780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:26.300 [2024-09-29 21:49:45.200787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:26.300 [2024-09-29 21:49:45.200794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:26.300 [2024-09-29 21:49:45.200802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.300 [2024-09-29 21:49:45.200809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:26.300 [2024-09-29 21:49:45.200816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:18:26.300 [2024-09-29 21:49:45.200823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.300 [2024-09-29 21:49:45.244182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.300 [2024-09-29 21:49:45.244245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:26.300 [2024-09-29 21:49:45.244260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.311 ms 00:18:26.300 [2024-09-29 21:49:45.244269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.300 [2024-09-29 21:49:45.244407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.300 [2024-09-29 21:49:45.244419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:26.300 [2024-09-29 21:49:45.244429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:26.300 [2024-09-29 21:49:45.244437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.300 [2024-09-29 21:49:45.276998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.300 [2024-09-29 21:49:45.277052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:26.300 [2024-09-29 21:49:45.277068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.474 ms 00:18:26.300 [2024-09-29 21:49:45.277077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.300 [2024-09-29 21:49:45.277136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.300 [2024-09-29 
21:49:45.277145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:26.300 [2024-09-29 21:49:45.277153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:26.300 [2024-09-29 21:49:45.277162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.300 [2024-09-29 21:49:45.277682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.300 [2024-09-29 21:49:45.277701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:26.300 [2024-09-29 21:49:45.277711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:18:26.300 [2024-09-29 21:49:45.277723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.300 [2024-09-29 21:49:45.277865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.300 [2024-09-29 21:49:45.277877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:26.300 [2024-09-29 21:49:45.277885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:18:26.300 [2024-09-29 21:49:45.277893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.291346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.291383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:26.559 [2024-09-29 21:49:45.291406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.433 ms 00:18:26.559 [2024-09-29 21:49:45.291414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.304309] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:26.559 [2024-09-29 21:49:45.304346] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:26.559 [2024-09-29 21:49:45.304359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.304368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:26.559 [2024-09-29 21:49:45.304377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.827 ms 00:18:26.559 [2024-09-29 21:49:45.304587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.329103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.329146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:26.559 [2024-09-29 21:49:45.329158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.453 ms 00:18:26.559 [2024-09-29 21:49:45.329167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.340837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.340871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:26.559 [2024-09-29 21:49:45.340882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.609 ms 00:18:26.559 [2024-09-29 21:49:45.340889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.352133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.352296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:18:26.559 [2024-09-29 21:49:45.352313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.208 ms 00:18:26.559 [2024-09-29 21:49:45.352321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.352981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.353004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:26.559 [2024-09-29 21:49:45.353013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:18:26.559 [2024-09-29 21:49:45.353021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.411491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.411555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:26.559 [2024-09-29 21:49:45.411569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.450 ms 00:18:26.559 [2024-09-29 21:49:45.411577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.422582] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:26.559 [2024-09-29 21:49:45.425739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.425772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:26.559 [2024-09-29 21:49:45.425786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.094 ms 00:18:26.559 [2024-09-29 21:49:45.425794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.425918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.425930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:26.559 [2024-09-29 21:49:45.425939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:26.559 [2024-09-29 21:49:45.425947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.426022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.426033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:26.559 [2024-09-29 21:49:45.426041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:26.559 [2024-09-29 21:49:45.426049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.426069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.426081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:26.559 [2024-09-29 21:49:45.426090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:26.559 [2024-09-29 21:49:45.426098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.426131] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:26.559 [2024-09-29 21:49:45.426141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.426158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:26.559 [2024-09-29 21:49:45.426166] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:26.559 [2024-09-29 21:49:45.426178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.450334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.559 [2024-09-29 21:49:45.450617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:26.559 [2024-09-29 21:49:45.450696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.136 ms 00:18:26.559 [2024-09-29 21:49:45.450722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.559 [2024-09-29 21:49:45.451051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:26.560 [2024-09-29 21:49:45.451200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:26.560 [2024-09-29 21:49:45.451263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:26.560 [2024-09-29 21:49:45.451287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:26.560 [2024-09-29 21:49:45.452573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 278.855 ms, result 0 00:18:49.061  Copying: 42/1024 [MB] (42 MBps) Copying: 91/1024 [MB] (49 MBps) Copying: 137/1024 [MB] (45 MBps) Copying: 180/1024 [MB] (43 MBps) Copying: 224/1024 [MB] (43 MBps) Copying: 268/1024 [MB] (44 MBps) Copying: 311/1024 [MB] (43 MBps) Copying: 355/1024 [MB] (43 MBps) Copying: 399/1024 [MB] (44 MBps) Copying: 443/1024 [MB] (44 MBps) Copying: 485/1024 [MB] (42 MBps) Copying: 533/1024 [MB] (47 MBps) Copying: 576/1024 [MB] (43 MBps) Copying: 622/1024 [MB] (45 MBps) Copying: 675/1024 [MB] (52 MBps) Copying: 727/1024 [MB] (52 MBps) Copying: 779/1024 [MB] (51 MBps) Copying: 823/1024 [MB] (43 MBps) Copying: 870/1024 [MB] (47 MBps) Copying: 915/1024 [MB] (44 MBps) Copying: 963/1024 [MB] (48 MBps) Copying: 1007/1024 [MB] (44 MBps) Copying: 1024/1024 [MB] (average 45 MBps)[2024-09-29 21:50:07.834026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.061 [2024-09-29 21:50:07.834081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:49.061 [2024-09-29 21:50:07.834096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:49.061 [2024-09-29 21:50:07.834104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.061 [2024-09-29 21:50:07.834127] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:49.061 [2024-09-29 21:50:07.836906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.061 [2024-09-29 21:50:07.836983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:49.061 [2024-09-29 21:50:07.836998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.764 ms 00:18:49.061 [2024-09-29 21:50:07.837006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.061 [2024-09-29 21:50:07.838823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.061 [2024-09-29 21:50:07.838865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:49.061 [2024-09-29 21:50:07.838877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.793 ms 00:18:49.061 [2024-09-29 21:50:07.838886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.061 [2024-09-29 21:50:07.851572] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.061 [2024-09-29 21:50:07.851734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:49.061 [2024-09-29 21:50:07.851752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.671 ms 00:18:49.061 [2024-09-29 21:50:07.851760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.061 [2024-09-29 21:50:07.858150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.062 [2024-09-29 21:50:07.858191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:49.062 [2024-09-29 21:50:07.858201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.349 ms 00:18:49.062 [2024-09-29 21:50:07.858208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.062 [2024-09-29 21:50:07.882709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.062 [2024-09-29 21:50:07.882826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:49.062 [2024-09-29 21:50:07.882841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.449 ms 00:18:49.062 [2024-09-29 21:50:07.882849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.062 [2024-09-29 21:50:07.896912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.062 [2024-09-29 21:50:07.896944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:49.062 [2024-09-29 21:50:07.896959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.034 ms 00:18:49.062 [2024-09-29 21:50:07.896968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.062 [2024-09-29 21:50:07.897092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.062 [2024-09-29 21:50:07.897102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:49.062 [2024-09-29 21:50:07.897111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:49.062 [2024-09-29 21:50:07.897120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.062 [2024-09-29 21:50:07.920117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.062 [2024-09-29 21:50:07.920146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:49.062 [2024-09-29 21:50:07.920156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.983 ms 00:18:49.062 [2024-09-29 21:50:07.920164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.062 [2024-09-29 21:50:07.940321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.062 [2024-09-29 21:50:07.940345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:49.062 [2024-09-29 21:50:07.940353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.129 ms 00:18:49.062 [2024-09-29 21:50:07.940360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.062 [2024-09-29 21:50:07.957197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.062 [2024-09-29 21:50:07.957224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:49.062 [2024-09-29 21:50:07.957232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.812 ms 00:18:49.062 [2024-09-29 21:50:07.957238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:49.062 [2024-09-29 21:50:07.974146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.062 [2024-09-29 21:50:07.974175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:49.062 [2024-09-29 21:50:07.974183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.863 ms 00:18:49.062 [2024-09-29 21:50:07.974189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.062 [2024-09-29 21:50:07.974214] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:49.062 [2024-09-29 21:50:07.974225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 
261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:49.062 [2024-09-29 21:50:07.974607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974666] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 
21:50:07.974819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:49.063 [2024-09-29 21:50:07.974856] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:49.063 [2024-09-29 21:50:07.974862] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc72902e-1a7b-4b64-8768-6432d06942bd 00:18:49.063 [2024-09-29 21:50:07.974868] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:49.063 [2024-09-29 21:50:07.974874] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:49.063 [2024-09-29 21:50:07.974879] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:49.063 [2024-09-29 21:50:07.974886] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:49.063 [2024-09-29 21:50:07.974892] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:49.063 [2024-09-29 21:50:07.974897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:49.063 [2024-09-29 21:50:07.974906] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:49.063 [2024-09-29 21:50:07.974911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:49.063 [2024-09-29 21:50:07.974916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:49.063 [2024-09-29 21:50:07.974922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.063 [2024-09-29 21:50:07.974927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:49.063 [2024-09-29 21:50:07.974945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:18:49.063 [2024-09-29 21:50:07.974952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.063 [2024-09-29 21:50:07.984611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.063 [2024-09-29 21:50:07.984745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:49.063 [2024-09-29 21:50:07.984757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.647 ms 00:18:49.063 [2024-09-29 21:50:07.984764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.063 [2024-09-29 21:50:07.985057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.063 [2024-09-29 21:50:07.985064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:49.063 [2024-09-29 21:50:07.985071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:18:49.063 [2024-09-29 21:50:07.985078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.063 [2024-09-29 21:50:08.008147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.063 [2024-09-29 21:50:08.008267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:49.063 
[2024-09-29 21:50:08.008279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.063 [2024-09-29 21:50:08.008290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.063 [2024-09-29 21:50:08.008340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.063 [2024-09-29 21:50:08.008347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:49.063 [2024-09-29 21:50:08.008353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.063 [2024-09-29 21:50:08.008360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.063 [2024-09-29 21:50:08.008419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.063 [2024-09-29 21:50:08.008428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:49.063 [2024-09-29 21:50:08.008434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.063 [2024-09-29 21:50:08.008441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.063 [2024-09-29 21:50:08.008455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.063 [2024-09-29 21:50:08.008462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:49.063 [2024-09-29 21:50:08.008469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.063 [2024-09-29 21:50:08.008475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.325 [2024-09-29 21:50:08.071345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.325 [2024-09-29 21:50:08.071518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:49.325 [2024-09-29 21:50:08.071539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.325 [2024-09-29 21:50:08.071546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.325 [2024-09-29 21:50:08.123109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.325 [2024-09-29 21:50:08.123281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:49.325 [2024-09-29 21:50:08.123296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.325 [2024-09-29 21:50:08.123303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.325 [2024-09-29 21:50:08.123376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.325 [2024-09-29 21:50:08.123384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:49.325 [2024-09-29 21:50:08.123404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.325 [2024-09-29 21:50:08.123411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.325 [2024-09-29 21:50:08.123440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.325 [2024-09-29 21:50:08.123451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:49.325 [2024-09-29 21:50:08.123459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.325 [2024-09-29 21:50:08.123465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.325 [2024-09-29 21:50:08.123543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.325 [2024-09-29 21:50:08.123552] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:49.325 [2024-09-29 21:50:08.123559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.325 [2024-09-29 21:50:08.123565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.325 [2024-09-29 21:50:08.123592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.325 [2024-09-29 21:50:08.123599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:49.325 [2024-09-29 21:50:08.123609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.325 [2024-09-29 21:50:08.123615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.325 [2024-09-29 21:50:08.123650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.325 [2024-09-29 21:50:08.123657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:49.325 [2024-09-29 21:50:08.123664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.325 [2024-09-29 21:50:08.123670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.325 [2024-09-29 21:50:08.123712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.325 [2024-09-29 21:50:08.123722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:49.325 [2024-09-29 21:50:08.123728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.325 [2024-09-29 21:50:08.123735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.325 [2024-09-29 21:50:08.123836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 289.789 ms, result 0 00:18:50.705 00:18:50.705 00:18:50.705 21:50:09 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:50.705 [2024-09-29 21:50:09.569555] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:50.705 [2024-09-29 21:50:09.569676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75188 ] 00:18:50.964 [2024-09-29 21:50:09.717626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.964 [2024-09-29 21:50:09.880263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.223 [2024-09-29 21:50:10.109673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:51.223 [2024-09-29 21:50:10.109730] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:51.482 [2024-09-29 21:50:10.262641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.482 [2024-09-29 21:50:10.262681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:51.482 [2024-09-29 21:50:10.262693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:51.482 [2024-09-29 21:50:10.262702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.482 [2024-09-29 21:50:10.262737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.482 [2024-09-29 21:50:10.262745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:51.482 [2024-09-29 21:50:10.262751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:51.482 [2024-09-29 21:50:10.262757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.482 [2024-09-29 21:50:10.262770] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:51.482 [2024-09-29 21:50:10.263274] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:51.482 [2024-09-29 21:50:10.263291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.482 [2024-09-29 21:50:10.263297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:51.482 [2024-09-29 21:50:10.263304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:18:51.482 [2024-09-29 21:50:10.263309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.482 [2024-09-29 21:50:10.264563] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:51.482 [2024-09-29 21:50:10.274766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.482 [2024-09-29 21:50:10.274793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:51.482 [2024-09-29 21:50:10.274802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.204 ms 00:18:51.482 [2024-09-29 21:50:10.274809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.482 [2024-09-29 21:50:10.274851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.482 [2024-09-29 21:50:10.274860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:51.482 [2024-09-29 21:50:10.274867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:51.482 [2024-09-29 21:50:10.274873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.482 [2024-09-29 21:50:10.280952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:51.483 [2024-09-29 21:50:10.280977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:51.483 [2024-09-29 21:50:10.280984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.035 ms 00:18:51.483 [2024-09-29 21:50:10.280990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.483 [2024-09-29 21:50:10.281045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.483 [2024-09-29 21:50:10.281052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:51.483 [2024-09-29 21:50:10.281059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:51.483 [2024-09-29 21:50:10.281065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.483 [2024-09-29 21:50:10.281099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.483 [2024-09-29 21:50:10.281106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:51.483 [2024-09-29 21:50:10.281113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:51.483 [2024-09-29 21:50:10.281118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.483 [2024-09-29 21:50:10.281131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:51.483 [2024-09-29 21:50:10.284196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.483 [2024-09-29 21:50:10.284331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:51.483 [2024-09-29 21:50:10.284345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.068 ms 00:18:51.483 [2024-09-29 21:50:10.284352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.483 [2024-09-29 21:50:10.284384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.483 [2024-09-29 21:50:10.284404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:51.483 [2024-09-29 21:50:10.284411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:51.483 [2024-09-29 21:50:10.284417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.483 [2024-09-29 21:50:10.284435] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:51.483 [2024-09-29 21:50:10.284452] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:51.483 [2024-09-29 21:50:10.284480] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:51.483 [2024-09-29 21:50:10.284494] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:51.483 [2024-09-29 21:50:10.284577] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:51.483 [2024-09-29 21:50:10.284587] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:51.483 [2024-09-29 21:50:10.284595] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:51.483 [2024-09-29 21:50:10.284605] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:51.483 [2024-09-29 21:50:10.284612] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:51.483 [2024-09-29 21:50:10.284619] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:51.483 [2024-09-29 21:50:10.284626] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:51.483 [2024-09-29 21:50:10.284632] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:51.483 [2024-09-29 21:50:10.284638] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:51.483 [2024-09-29 21:50:10.284645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.483 [2024-09-29 21:50:10.284650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:51.483 [2024-09-29 21:50:10.284656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:18:51.483 [2024-09-29 21:50:10.284662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.483 [2024-09-29 21:50:10.284725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.483 [2024-09-29 21:50:10.284734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:51.483 [2024-09-29 21:50:10.284741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:51.483 [2024-09-29 21:50:10.284748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.483 [2024-09-29 21:50:10.284824] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:51.483 [2024-09-29 21:50:10.284833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:51.483 [2024-09-29 21:50:10.284839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:51.483 [2024-09-29 21:50:10.284845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.483 [2024-09-29 21:50:10.284851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:51.483 [2024-09-29 21:50:10.284856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:51.483 [2024-09-29 21:50:10.284862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:51.483 [2024-09-29 21:50:10.284867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:51.483 [2024-09-29 21:50:10.284873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:51.483 [2024-09-29 21:50:10.284879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:51.483 [2024-09-29 21:50:10.284884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:51.483 [2024-09-29 21:50:10.284893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:51.483 [2024-09-29 21:50:10.284899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:51.483 [2024-09-29 21:50:10.284908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:51.483 [2024-09-29 21:50:10.284914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:51.483 [2024-09-29 21:50:10.284919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.483 [2024-09-29 21:50:10.284924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:51.483 [2024-09-29 21:50:10.284929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:51.483 [2024-09-29 21:50:10.284934] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.483 [2024-09-29 21:50:10.284939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:51.483 [2024-09-29 21:50:10.284944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:51.483 [2024-09-29 21:50:10.284949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.483 [2024-09-29 21:50:10.284955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:51.483 [2024-09-29 21:50:10.284960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:51.483 [2024-09-29 21:50:10.284964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.483 [2024-09-29 21:50:10.284969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:51.483 [2024-09-29 21:50:10.284974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:51.483 [2024-09-29 21:50:10.284979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.483 [2024-09-29 21:50:10.284984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:51.483 [2024-09-29 21:50:10.284989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:51.483 [2024-09-29 21:50:10.284994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:51.483 [2024-09-29 21:50:10.284999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:51.483 [2024-09-29 21:50:10.285003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:51.483 [2024-09-29 21:50:10.285008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:51.483 [2024-09-29 21:50:10.285013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:51.483 [2024-09-29 21:50:10.285018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:51.483 [2024-09-29 21:50:10.285023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:51.483 [2024-09-29 21:50:10.285028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:51.483 [2024-09-29 21:50:10.285033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:51.483 [2024-09-29 21:50:10.285039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.483 [2024-09-29 21:50:10.285044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:51.483 [2024-09-29 21:50:10.285050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:51.483 [2024-09-29 21:50:10.285054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.483 [2024-09-29 21:50:10.285062] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:51.483 [2024-09-29 21:50:10.285068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:51.483 [2024-09-29 21:50:10.285075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:51.483 [2024-09-29 21:50:10.285081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:51.483 [2024-09-29 21:50:10.285087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:51.483 [2024-09-29 21:50:10.285093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:51.483 [2024-09-29 21:50:10.285098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:51.483 
[2024-09-29 21:50:10.285103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:51.483 [2024-09-29 21:50:10.285108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:51.483 [2024-09-29 21:50:10.285113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:51.483 [2024-09-29 21:50:10.285119] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:51.483 [2024-09-29 21:50:10.285126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:51.483 [2024-09-29 21:50:10.285132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:51.483 [2024-09-29 21:50:10.285138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:51.483 [2024-09-29 21:50:10.285144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:51.483 [2024-09-29 21:50:10.285149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:51.483 [2024-09-29 21:50:10.285154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:51.483 [2024-09-29 21:50:10.285160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:51.483 [2024-09-29 21:50:10.285165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:51.484 [2024-09-29 21:50:10.285170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:51.484 [2024-09-29 21:50:10.285175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:51.484 [2024-09-29 21:50:10.285180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:51.484 [2024-09-29 21:50:10.285186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:51.484 [2024-09-29 21:50:10.285191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:51.484 [2024-09-29 21:50:10.285196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:51.484 [2024-09-29 21:50:10.285202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:51.484 [2024-09-29 21:50:10.285207] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:51.484 [2024-09-29 21:50:10.285214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:51.484 [2024-09-29 21:50:10.285220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:51.484 [2024-09-29 21:50:10.285226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:51.484 [2024-09-29 21:50:10.285231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:51.484 [2024-09-29 21:50:10.285236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:51.484 [2024-09-29 21:50:10.285243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.285249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:51.484 [2024-09-29 21:50:10.285254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:18:51.484 [2024-09-29 21:50:10.285260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.324856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.324890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:51.484 [2024-09-29 21:50:10.324900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.546 ms 00:18:51.484 [2024-09-29 21:50:10.324907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.324984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.324992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:51.484 [2024-09-29 21:50:10.324998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:51.484 [2024-09-29 21:50:10.325004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.351264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.351410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:51.484 [2024-09-29 21:50:10.351429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.216 ms 00:18:51.484 [2024-09-29 21:50:10.351436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.351462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.351469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:51.484 [2024-09-29 21:50:10.351475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:51.484 [2024-09-29 21:50:10.351482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.351885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.351898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:51.484 [2024-09-29 21:50:10.351906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:18:51.484 [2024-09-29 21:50:10.351915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.352027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.352036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:51.484 [2024-09-29 21:50:10.352042] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:51.484 [2024-09-29 21:50:10.352049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.363026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.363125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:51.484 [2024-09-29 21:50:10.363137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.961 ms 00:18:51.484 [2024-09-29 21:50:10.363144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.373305] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:51.484 [2024-09-29 21:50:10.373331] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:51.484 [2024-09-29 21:50:10.373341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.373348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:51.484 [2024-09-29 21:50:10.373355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.103 ms 00:18:51.484 [2024-09-29 21:50:10.373361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.391952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.391980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:51.484 [2024-09-29 21:50:10.391989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.545 ms 00:18:51.484 [2024-09-29 21:50:10.391996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.400860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.400885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:51.484 [2024-09-29 21:50:10.400893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.832 ms 00:18:51.484 [2024-09-29 21:50:10.400899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.409613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.409648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:51.484 [2024-09-29 21:50:10.409656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.682 ms 00:18:51.484 [2024-09-29 21:50:10.409661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.410116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.410131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:51.484 [2024-09-29 21:50:10.410139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:18:51.484 [2024-09-29 21:50:10.410145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.484 [2024-09-29 21:50:10.458145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.484 [2024-09-29 21:50:10.458183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:51.484 [2024-09-29 21:50:10.458193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.988 ms 00:18:51.484 [2024-09-29 21:50:10.458200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.742 [2024-09-29 21:50:10.466310] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:51.742 [2024-09-29 21:50:10.468413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.742 [2024-09-29 21:50:10.468437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:51.742 [2024-09-29 21:50:10.468446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.182 ms 00:18:51.742 [2024-09-29 21:50:10.468457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.742 [2024-09-29 21:50:10.468507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.742 [2024-09-29 21:50:10.468516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:51.742 [2024-09-29 21:50:10.468523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:51.742 [2024-09-29 21:50:10.468529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.742 [2024-09-29 21:50:10.468600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.742 [2024-09-29 21:50:10.468609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:51.742 [2024-09-29 21:50:10.468616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:51.742 [2024-09-29 21:50:10.468622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.742 [2024-09-29 21:50:10.468640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.742 [2024-09-29 21:50:10.468648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:51.742 [2024-09-29 21:50:10.468654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:51.742 [2024-09-29 21:50:10.468660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.742 [2024-09-29 21:50:10.468687] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:51.742 [2024-09-29 21:50:10.468696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.742 [2024-09-29 21:50:10.468702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:51.742 [2024-09-29 21:50:10.468711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:51.742 [2024-09-29 21:50:10.468717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.742 [2024-09-29 21:50:10.486660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.742 [2024-09-29 21:50:10.486685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:51.742 [2024-09-29 21:50:10.486693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.929 ms 00:18:51.742 [2024-09-29 21:50:10.486700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.742 [2024-09-29 21:50:10.486757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.742 [2024-09-29 21:50:10.486765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:51.742 [2024-09-29 21:50:10.486772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:51.742 [2024-09-29 21:50:10.486778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
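
Note: every managed FTL step above is traced as a fixed quadruple from mngt/ftl_mngt.c (427 "Action"/"Rollback", 428 "name: ...", 430 "duration: ...", 431 "status: ..."). Assuming the console stream has been captured one entry per line (the transcript here wraps several entries per physical line), a rough awk sketch like the following, run against a hypothetical build.log capture, pairs each step name with its duration:

    awk '
      /trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
      /trace_step.*duration:/ { match($0, /duration: [0-9.]+ ms/)
                                print name ": " substr($0, RSTART + 10, RLENGTH - 10) }
    ' build.log

With the entries above this would print lines such as "Load super block: 10.204 ms", which is a quick way to spot the slow stages of a startup or shutdown.
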
00:18:51.742 [2024-09-29 21:50:10.487656] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.607 ms, result 0 00:19:13.991  Copying: 1024/1024 [MB] (average 47 MBps) [2024-09-29 21:50:32.769941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.770035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:13.991 [2024-09-29 21:50:32.770061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:13.991 [2024-09-29 21:50:32.770084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.770125] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:13.991 [2024-09-29 21:50:32.775128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.775176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:13.991 [2024-09-29 21:50:32.775195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.977 ms 00:19:13.991 [2024-09-29 21:50:32.775211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.775638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.775666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:13.991 [2024-09-29 21:50:32.775683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:19:13.991 [2024-09-29 21:50:32.775698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.783823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.783850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:13.991 [2024-09-29 21:50:32.783860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.096 ms 00:19:13.991 [2024-09-29 21:50:32.783869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.790084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.790110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:13.991 [2024-09-29 21:50:32.790120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.198 ms 00:19:13.991 [2024-09-29 21:50:32.790127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.814611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.814643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:13.991 
[2024-09-29 21:50:32.814654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.413 ms 00:19:13.991 [2024-09-29 21:50:32.814662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.828765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.829007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:13.991 [2024-09-29 21:50:32.829025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.083 ms 00:19:13.991 [2024-09-29 21:50:32.829033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.829157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.829168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:13.991 [2024-09-29 21:50:32.829177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:13.991 [2024-09-29 21:50:32.829185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.852453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.852483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:13.991 [2024-09-29 21:50:32.852494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.253 ms 00:19:13.991 [2024-09-29 21:50:32.852501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.875313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.875351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:13.991 [2024-09-29 21:50:32.875361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.795 ms 00:19:13.991 [2024-09-29 21:50:32.875368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.897122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.897151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:13.991 [2024-09-29 21:50:32.897160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.737 ms 00:19:13.991 [2024-09-29 21:50:32.897167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.919203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.991 [2024-09-29 21:50:32.919346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:13.991 [2024-09-29 21:50:32.919361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.996 ms 00:19:13.991 [2024-09-29 21:50:32.919368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.991 [2024-09-29 21:50:32.919407] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:13.991 [2024-09-29 21:50:32.919423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919450] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:13.991 [2024-09-29 21:50:32.919635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919644] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 
21:50:32.919836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.919997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:19:13.992 [2024-09-29 21:50:32.920028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:13.992 [2024-09-29 21:50:32.920200] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:13.992 [2024-09-29 21:50:32.920209] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc72902e-1a7b-4b64-8768-6432d06942bd 00:19:13.992 [2024-09-29 21:50:32.920216] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:13.992 [2024-09-29 
21:50:32.920224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:13.992 [2024-09-29 21:50:32.920231] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:13.992 [2024-09-29 21:50:32.920238] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:13.992 [2024-09-29 21:50:32.920245] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:13.992 [2024-09-29 21:50:32.920256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:13.992 [2024-09-29 21:50:32.920263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:13.992 [2024-09-29 21:50:32.920271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:13.992 [2024-09-29 21:50:32.920278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:13.992 [2024-09-29 21:50:32.920286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.992 [2024-09-29 21:50:32.920300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:13.992 [2024-09-29 21:50:32.920308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:19:13.992 [2024-09-29 21:50:32.920316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.992 [2024-09-29 21:50:32.933812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.992 [2024-09-29 21:50:32.933975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:13.992 [2024-09-29 21:50:32.934032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.479 ms 00:19:13.992 [2024-09-29 21:50:32.934064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.992 [2024-09-29 21:50:32.934493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.992 [2024-09-29 21:50:32.934535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:13.992 [2024-09-29 21:50:32.934596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:19:13.993 [2024-09-29 21:50:32.934620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.993 [2024-09-29 21:50:32.963974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.993 [2024-09-29 21:50:32.964106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.993 [2024-09-29 21:50:32.964154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.993 [2024-09-29 21:50:32.964182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.993 [2024-09-29 21:50:32.964257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.993 [2024-09-29 21:50:32.964279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.993 [2024-09-29 21:50:32.964299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.993 [2024-09-29 21:50:32.964317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.993 [2024-09-29 21:50:32.964376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.993 [2024-09-29 21:50:32.964410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.993 [2024-09-29 21:50:32.964420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.993 [2024-09-29 21:50:32.964427] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:13.993 [2024-09-29 21:50:32.964448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.993 [2024-09-29 21:50:32.964457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.993 [2024-09-29 21:50:32.964466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.993 [2024-09-29 21:50:32.964474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.252 [2024-09-29 21:50:33.043956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.252 [2024-09-29 21:50:33.044183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.252 [2024-09-29 21:50:33.044234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.252 [2024-09-29 21:50:33.044263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.252 [2024-09-29 21:50:33.109534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.252 [2024-09-29 21:50:33.109731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.252 [2024-09-29 21:50:33.109779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.252 [2024-09-29 21:50:33.109801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.252 [2024-09-29 21:50:33.109894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.252 [2024-09-29 21:50:33.109916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.252 [2024-09-29 21:50:33.109936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.252 [2024-09-29 21:50:33.109955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.252 [2024-09-29 21:50:33.110008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.252 [2024-09-29 21:50:33.110030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.252 [2024-09-29 21:50:33.110050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.252 [2024-09-29 21:50:33.110108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.252 [2024-09-29 21:50:33.110229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.252 [2024-09-29 21:50:33.110255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.252 [2024-09-29 21:50:33.110276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.252 [2024-09-29 21:50:33.110294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.252 [2024-09-29 21:50:33.110404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.252 [2024-09-29 21:50:33.110436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:14.252 [2024-09-29 21:50:33.110458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.252 [2024-09-29 21:50:33.110477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.252 [2024-09-29 21:50:33.110529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.252 [2024-09-29 21:50:33.110665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.252 [2024-09-29 21:50:33.110686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:19:14.252 [2024-09-29 21:50:33.110704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.252 [2024-09-29 21:50:33.110764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.252 [2024-09-29 21:50:33.110822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.252 [2024-09-29 21:50:33.110846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.252 [2024-09-29 21:50:33.110865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.252 [2024-09-29 21:50:33.111003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.052 ms, result 0 00:19:15.187 00:19:15.187 00:19:15.187 21:50:33 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:17.090 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:17.090 21:50:35 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:17.090 [2024-09-29 21:50:35.965283] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:17.090 [2024-09-29 21:50:35.965554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75470 ] 00:19:17.349 [2024-09-29 21:50:36.108150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.349 [2024-09-29 21:50:36.315107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.608 [2024-09-29 21:50:36.586212] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.608 [2024-09-29 21:50:36.586516] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.868 [2024-09-29 21:50:36.740852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.741067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:17.868 [2024-09-29 21:50:36.741136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:17.868 [2024-09-29 21:50:36.741168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.741235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.741261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.868 [2024-09-29 21:50:36.741282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:17.868 [2024-09-29 21:50:36.741433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.741540] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:17.868 [2024-09-29 21:50:36.742241] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:17.868 [2024-09-29 21:50:36.742283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.742292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.868 [2024-09-29 21:50:36.742301] 
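
Note: with the md5 check passing ("testfile: OK" above), restore.sh@79 runs the copy in the opposite direction, writing the file back into the bdev at an offset. This is the same invocation shown in the log, under the same hypothetical SPDK_REPO shorthand as before; --if/--ob swap the roles of file and bdev, and --seek=131072, by analogy with dd, presumably offsets the write destination by that many I/O units:

    # write the verified file back into ftl0, skipping the first
    # 131072 output units of the bdev
    "$SPDK_REPO/build/bin/spdk_dd" --if="$SPDK_REPO/test/ftl/testfile" \
        --ob=ftl0 \
        --json="$SPDK_REPO/test/ftl/config/ftl.json" \
        --seek=131072
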
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:19:17.868 [2024-09-29 21:50:36.742309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.743677] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:17.868 [2024-09-29 21:50:36.756450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.756589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:17.868 [2024-09-29 21:50:36.756608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.774 ms 00:19:17.868 [2024-09-29 21:50:36.756616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.756693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.756704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:17.868 [2024-09-29 21:50:36.756712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:17.868 [2024-09-29 21:50:36.756720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.763115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.763145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.868 [2024-09-29 21:50:36.763155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.338 ms 00:19:17.868 [2024-09-29 21:50:36.763163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.763240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.763251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.868 [2024-09-29 21:50:36.763259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:17.868 [2024-09-29 21:50:36.763267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.763308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.763317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:17.868 [2024-09-29 21:50:36.763325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:17.868 [2024-09-29 21:50:36.763333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.763355] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:17.868 [2024-09-29 21:50:36.767002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.767030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.868 [2024-09-29 21:50:36.767040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.653 ms 00:19:17.868 [2024-09-29 21:50:36.767048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.767076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.767085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:17.868 [2024-09-29 21:50:36.767093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:17.868 [2024-09-29 21:50:36.767101] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.767130] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:17.868 [2024-09-29 21:50:36.767153] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:17.868 [2024-09-29 21:50:36.767189] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:17.868 [2024-09-29 21:50:36.767205] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:17.868 [2024-09-29 21:50:36.767309] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:17.868 [2024-09-29 21:50:36.767320] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:17.868 [2024-09-29 21:50:36.767330] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:17.868 [2024-09-29 21:50:36.767343] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:17.868 [2024-09-29 21:50:36.767352] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:17.868 [2024-09-29 21:50:36.767361] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:17.868 [2024-09-29 21:50:36.767368] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:17.868 [2024-09-29 21:50:36.767376] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:17.868 [2024-09-29 21:50:36.767395] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:17.868 [2024-09-29 21:50:36.767403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.767411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:17.868 [2024-09-29 21:50:36.767420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:19:17.868 [2024-09-29 21:50:36.767428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.767510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.868 [2024-09-29 21:50:36.767521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:17.868 [2024-09-29 21:50:36.767529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:17.868 [2024-09-29 21:50:36.767536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.868 [2024-09-29 21:50:36.767649] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:17.868 [2024-09-29 21:50:36.767660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:17.868 [2024-09-29 21:50:36.767668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.868 [2024-09-29 21:50:36.767676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.868 [2024-09-29 21:50:36.767684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:17.868 [2024-09-29 21:50:36.767691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:17.868 [2024-09-29 21:50:36.767698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:19:17.868 [2024-09-29 21:50:36.767706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:17.868 [2024-09-29 21:50:36.767714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:17.868 [2024-09-29 21:50:36.767720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.868 [2024-09-29 21:50:36.767727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:17.868 [2024-09-29 21:50:36.767734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:17.868 [2024-09-29 21:50:36.767741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.868 [2024-09-29 21:50:36.767753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:17.868 [2024-09-29 21:50:36.767761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:17.868 [2024-09-29 21:50:36.767768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.868 [2024-09-29 21:50:36.767775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:17.868 [2024-09-29 21:50:36.767782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:17.868 [2024-09-29 21:50:36.767790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.868 [2024-09-29 21:50:36.767797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:17.868 [2024-09-29 21:50:36.767805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:17.868 [2024-09-29 21:50:36.767811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.868 [2024-09-29 21:50:36.767818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:17.868 [2024-09-29 21:50:36.767824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:17.868 [2024-09-29 21:50:36.767831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.868 [2024-09-29 21:50:36.767837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:17.868 [2024-09-29 21:50:36.767844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:17.868 [2024-09-29 21:50:36.767850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.869 [2024-09-29 21:50:36.767857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:17.869 [2024-09-29 21:50:36.767864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:17.869 [2024-09-29 21:50:36.767870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:17.869 [2024-09-29 21:50:36.767877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:17.869 [2024-09-29 21:50:36.767883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:17.869 [2024-09-29 21:50:36.767889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.869 [2024-09-29 21:50:36.767895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:17.869 [2024-09-29 21:50:36.767903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:17.869 [2024-09-29 21:50:36.767909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.869 [2024-09-29 21:50:36.767916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:17.869 [2024-09-29 21:50:36.767923] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:17.869 [2024-09-29 21:50:36.767929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.869 [2024-09-29 21:50:36.767935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:17.869 [2024-09-29 21:50:36.767942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:17.869 [2024-09-29 21:50:36.767951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.869 [2024-09-29 21:50:36.767957] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:17.869 [2024-09-29 21:50:36.767965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:17.869 [2024-09-29 21:50:36.767974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.869 [2024-09-29 21:50:36.767981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.869 [2024-09-29 21:50:36.767988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:17.869 [2024-09-29 21:50:36.767995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:17.869 [2024-09-29 21:50:36.768001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:17.869 [2024-09-29 21:50:36.768010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:17.869 [2024-09-29 21:50:36.768017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:17.869 [2024-09-29 21:50:36.768024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:17.869 [2024-09-29 21:50:36.768032] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:17.869 [2024-09-29 21:50:36.768041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.869 [2024-09-29 21:50:36.768050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:17.869 [2024-09-29 21:50:36.768057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:17.869 [2024-09-29 21:50:36.768065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:17.869 [2024-09-29 21:50:36.768072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:17.869 [2024-09-29 21:50:36.768079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:17.869 [2024-09-29 21:50:36.768086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:17.869 [2024-09-29 21:50:36.768093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:17.869 [2024-09-29 21:50:36.768100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:17.869 [2024-09-29 21:50:36.768108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:17.869 [2024-09-29 21:50:36.768115] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:17.869 [2024-09-29 21:50:36.768122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:17.869 [2024-09-29 21:50:36.768129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:17.869 [2024-09-29 21:50:36.768136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:17.869 [2024-09-29 21:50:36.768143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:17.869 [2024-09-29 21:50:36.768150] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:17.869 [2024-09-29 21:50:36.768161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.869 [2024-09-29 21:50:36.768170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:17.869 [2024-09-29 21:50:36.768177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:17.869 [2024-09-29 21:50:36.768185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:17.869 [2024-09-29 21:50:36.768192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:17.869 [2024-09-29 21:50:36.768200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.869 [2024-09-29 21:50:36.768207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:17.869 [2024-09-29 21:50:36.768215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:19:17.869 [2024-09-29 21:50:36.768222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.869 [2024-09-29 21:50:36.811863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.869 [2024-09-29 21:50:36.811910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.869 [2024-09-29 21:50:36.811923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.595 ms 00:19:17.869 [2024-09-29 21:50:36.811932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.869 [2024-09-29 21:50:36.812026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.869 [2024-09-29 21:50:36.812035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:17.869 [2024-09-29 21:50:36.812044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:17.869 [2024-09-29 21:50:36.812051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.869 [2024-09-29 21:50:36.844311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.869 [2024-09-29 21:50:36.844347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.869 [2024-09-29 21:50:36.844361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 32.196 ms 00:19:17.869 [2024-09-29 21:50:36.844369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.869 [2024-09-29 21:50:36.844416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.869 [2024-09-29 21:50:36.844425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.869 [2024-09-29 21:50:36.844434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:17.869 [2024-09-29 21:50:36.844442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.869 [2024-09-29 21:50:36.844882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.869 [2024-09-29 21:50:36.844904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.869 [2024-09-29 21:50:36.844913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:19:17.869 [2024-09-29 21:50:36.844925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.869 [2024-09-29 21:50:36.845053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.869 [2024-09-29 21:50:36.845063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.869 [2024-09-29 21:50:36.845072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:17.869 [2024-09-29 21:50:36.845080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.858315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.858343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.129 [2024-09-29 21:50:36.858354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.215 ms 00:19:18.129 [2024-09-29 21:50:36.858362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.871047] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:18.129 [2024-09-29 21:50:36.871079] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:18.129 [2024-09-29 21:50:36.871091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.871099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:18.129 [2024-09-29 21:50:36.871108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.620 ms 00:19:18.129 [2024-09-29 21:50:36.871115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.895476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.895516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:18.129 [2024-09-29 21:50:36.895528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.321 ms 00:19:18.129 [2024-09-29 21:50:36.895536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.907018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.907046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:18.129 [2024-09-29 21:50:36.907056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.446 ms 00:19:18.129 [2024-09-29 21:50:36.907064] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.916392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.916416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:18.129 [2024-09-29 21:50:36.916424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.283 ms 00:19:18.129 [2024-09-29 21:50:36.916429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.916923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.916939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:18.129 [2024-09-29 21:50:36.916947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:19:18.129 [2024-09-29 21:50:36.916952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.965181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.965227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:18.129 [2024-09-29 21:50:36.965239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.212 ms 00:19:18.129 [2024-09-29 21:50:36.965246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.973782] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:18.129 [2024-09-29 21:50:36.976343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.976524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:18.129 [2024-09-29 21:50:36.976540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.051 ms 00:19:18.129 [2024-09-29 21:50:36.976552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.976651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.976659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:18.129 [2024-09-29 21:50:36.976668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:18.129 [2024-09-29 21:50:36.976675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.976738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.976747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:18.129 [2024-09-29 21:50:36.976754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:18.129 [2024-09-29 21:50:36.976761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.976779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.976786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:18.129 [2024-09-29 21:50:36.976792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:18.129 [2024-09-29 21:50:36.976799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.976829] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:18.129 [2024-09-29 21:50:36.976837] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.976844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:18.129 [2024-09-29 21:50:36.976854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:18.129 [2024-09-29 21:50:36.976862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.994985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.995017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:18.129 [2024-09-29 21:50:36.995028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.108 ms 00:19:18.129 [2024-09-29 21:50:36.995035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.995099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.129 [2024-09-29 21:50:36.995107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:18.129 [2024-09-29 21:50:36.995115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:18.129 [2024-09-29 21:50:36.995121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.129 [2024-09-29 21:50:36.996154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 254.915 ms, result 0 00:19:40.077  Copying: 50/1024 [MB] (50 MBps) Copying: 98/1024 [MB] (48 MBps) Copying: 147/1024 [MB] (48 MBps) Copying: 197/1024 [MB] (50 MBps) Copying: 242/1024 [MB] (44 MBps) Copying: 292/1024 [MB] (49 MBps) Copying: 339/1024 [MB] (47 MBps) Copying: 390/1024 [MB] (51 MBps) Copying: 439/1024 [MB] (48 MBps) Copying: 483/1024 [MB] (44 MBps) Copying: 528/1024 [MB] (44 MBps) Copying: 574/1024 [MB] (45 MBps) Copying: 619/1024 [MB] (44 MBps) Copying: 671/1024 [MB] (52 MBps) Copying: 723/1024 [MB] (52 MBps) Copying: 776/1024 [MB] (52 MBps) Copying: 828/1024 [MB] (52 MBps) Copying: 880/1024 [MB] (51 MBps) Copying: 925/1024 [MB] (45 MBps) Copying: 971/1024 [MB] (45 MBps) Copying: 1016/1024 [MB] (44 MBps) Copying: 1048548/1048576 [kB] (8020 kBps) Copying: 1024/1024 [MB] (average 46 MBps)[2024-09-29 21:50:59.052491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.077 [2024-09-29 21:50:59.052637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:40.077 [2024-09-29 21:50:59.052707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:40.077 [2024-09-29 21:50:59.052733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.077 [2024-09-29 21:50:59.053696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:40.077 [2024-09-29 21:50:59.058500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.077 [2024-09-29 21:50:59.058612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:40.077 [2024-09-29 21:50:59.058673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.688 ms 00:19:40.077 [2024-09-29 21:50:59.058702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.071018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-09-29 21:50:59.071122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:40.336 [2024-09-29 
21:50:59.071182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.184 ms 00:19:40.336 [2024-09-29 21:50:59.071206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.088819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-09-29 21:50:59.088937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:40.336 [2024-09-29 21:50:59.088995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.585 ms 00:19:40.336 [2024-09-29 21:50:59.089017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.095168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-09-29 21:50:59.095259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:40.336 [2024-09-29 21:50:59.095306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:19:40.336 [2024-09-29 21:50:59.095327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.119681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-09-29 21:50:59.119793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:40.336 [2024-09-29 21:50:59.119844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.281 ms 00:19:40.336 [2024-09-29 21:50:59.119869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.134071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-09-29 21:50:59.134173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:40.336 [2024-09-29 21:50:59.134230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.163 ms 00:19:40.336 [2024-09-29 21:50:59.134253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.186478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-09-29 21:50:59.186592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:40.336 [2024-09-29 21:50:59.186641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.184 ms 00:19:40.336 [2024-09-29 21:50:59.186665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.210040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-09-29 21:50:59.210140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:40.336 [2024-09-29 21:50:59.210199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.348 ms 00:19:40.336 [2024-09-29 21:50:59.210221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.232893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-09-29 21:50:59.232919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:40.336 [2024-09-29 21:50:59.232929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.635 ms 00:19:40.336 [2024-09-29 21:50:59.232937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.255092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-09-29 21:50:59.255122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:19:40.336 [2024-09-29 21:50:59.255132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.125 ms 00:19:40.336 [2024-09-29 21:50:59.255140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.277600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-09-29 21:50:59.277630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:40.336 [2024-09-29 21:50:59.277641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.408 ms 00:19:40.336 [2024-09-29 21:50:59.277648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-09-29 21:50:59.277677] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:40.336 [2024-09-29 21:50:59.277691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118784 / 261120 wr_cnt: 1 state: open 00:19:40.336 [2024-09-29 21:50:59.277701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:40.336 [2024-09-29 21:50:59.277710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:40.336 [2024-09-29 21:50:59.277718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:40.336 [2024-09-29 21:50:59.277727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:40.336 [2024-09-29 21:50:59.277735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:40.336 [2024-09-29 21:50:59.277743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 
[2024-09-29 21:50:59.277843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.277995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:19:40.337 [2024-09-29 21:50:59.278033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:40.337 [2024-09-29 21:50:59.278498] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:40.337 [2024-09-29 21:50:59.278506] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc72902e-1a7b-4b64-8768-6432d06942bd 00:19:40.337 [2024-09-29 21:50:59.278518] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118784 00:19:40.337 [2024-09-29 21:50:59.278526] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119744 00:19:40.337 [2024-09-29 21:50:59.278557] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118784 00:19:40.337 [2024-09-29 21:50:59.278567] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:19:40.337 [2024-09-29 21:50:59.278574] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:40.337 [2024-09-29 21:50:59.278581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:40.338 [2024-09-29 21:50:59.278589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:40.338 [2024-09-29 21:50:59.278596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:40.338 [2024-09-29 21:50:59.278602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:40.338 [2024-09-29 21:50:59.278609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.338 [2024-09-29 21:50:59.278623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:40.338 [2024-09-29 21:50:59.278631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:19:40.338 [2024-09-29 21:50:59.278639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.338 [2024-09-29 21:50:59.291438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.338 [2024-09-29 21:50:59.291466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:40.338 [2024-09-29 21:50:59.291477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.785 ms 00:19:40.338 [2024-09-29 21:50:59.291486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.338 [2024-09-29 21:50:59.291855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.338 [2024-09-29 21:50:59.291865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:40.338 [2024-09-29 21:50:59.291878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:19:40.338 [2024-09-29 21:50:59.291885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
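A note on the statistics dump above, since the WAF line is easy to read past: it is simply the ratio of the two counters printed beside it, and the log's own numbers check out:

    \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{119744}{118784} \approx 1.0081

so the FTL issued roughly 960 blocks of metadata and bookkeeping writes on top of the 118784 user blocks in this run — under 1% write amplification.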
00:19:40.596 [2024-09-29 21:50:59.321303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.321337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.596 [2024-09-29 21:50:59.321347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.321355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.321431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.321440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.596 [2024-09-29 21:50:59.321452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.321459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.321515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.321525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.596 [2024-09-29 21:50:59.321533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.321540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.321557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.321582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.596 [2024-09-29 21:50:59.321590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.321600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.401781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.401826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.596 [2024-09-29 21:50:59.401838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.401846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.466994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.467040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.596 [2024-09-29 21:50:59.467051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.467063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.467138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.467149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.596 [2024-09-29 21:50:59.467157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.467164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.467198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.467207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.596 [2024-09-29 21:50:59.467215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 
21:50:59.467223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.467314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.467325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.596 [2024-09-29 21:50:59.467333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.467341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.467372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.467382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:40.596 [2024-09-29 21:50:59.467408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.467416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.467456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.467466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.596 [2024-09-29 21:50:59.467474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.467482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.467526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.596 [2024-09-29 21:50:59.467537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.596 [2024-09-29 21:50:59.467545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.596 [2024-09-29 21:50:59.467552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-09-29 21:50:59.467672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 416.237 ms, result 0 00:19:43.127 00:19:43.127 00:19:43.127 21:51:01 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:19:43.127 [2024-09-29 21:51:02.053156] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
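Before the read-back below gets going, the offsets in this spdk_dd invocation are worth decoding. If the dd-style --skip/--seek/--count options are taken in bdev blocks, as their names suggest, and the FTL bdev uses the 4096-byte block size implied by pairing --count=262144 with the 1024 MB testfile copied in the write pass above, then the regions line up exactly:

    131072 \times 4096\,\mathrm{B} = 512\,\mathrm{MiB} \quad\text{(offset into ftl0, matching --seek=131072 on the write)}
    262144 \times 4096\,\mathrm{B} = 1024\,\mathrm{MiB} \quad\text{(the full testfile)}

i.e. restore.sh@80 reads back the same region that restore.sh@79 wrote before the FTL device was shut down and restarted, presumably so it can be checksummed against the same reference used in the earlier md5sum step.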
00:19:43.127 [2024-09-29 21:51:02.053280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75728 ] 00:19:43.386 [2024-09-29 21:51:02.201656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.644 [2024-09-29 21:51:02.408846] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.904 [2024-09-29 21:51:02.678687] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:43.904 [2024-09-29 21:51:02.678752] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:43.904 [2024-09-29 21:51:02.834035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.834089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:43.904 [2024-09-29 21:51:02.834103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:43.904 [2024-09-29 21:51:02.834116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.834159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.834169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.904 [2024-09-29 21:51:02.834178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:43.904 [2024-09-29 21:51:02.834201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.834221] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:43.904 [2024-09-29 21:51:02.834879] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:43.904 [2024-09-29 21:51:02.834903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.834911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.904 [2024-09-29 21:51:02.834921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:19:43.904 [2024-09-29 21:51:02.834929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.836280] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:43.904 [2024-09-29 21:51:02.849061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.849093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:43.904 [2024-09-29 21:51:02.849106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.783 ms 00:19:43.904 [2024-09-29 21:51:02.849114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.849166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.849176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:43.904 [2024-09-29 21:51:02.849184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:43.904 [2024-09-29 21:51:02.849192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.855668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:43.904 [2024-09-29 21:51:02.855697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.904 [2024-09-29 21:51:02.855708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.427 ms 00:19:43.904 [2024-09-29 21:51:02.855716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.855788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.855797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.904 [2024-09-29 21:51:02.855806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:43.904 [2024-09-29 21:51:02.855814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.855867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.855877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:43.904 [2024-09-29 21:51:02.855885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:43.904 [2024-09-29 21:51:02.855893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.855915] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.904 [2024-09-29 21:51:02.859485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.859513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.904 [2024-09-29 21:51:02.859522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.576 ms 00:19:43.904 [2024-09-29 21:51:02.859529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.859562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.859571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:43.904 [2024-09-29 21:51:02.859580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:43.904 [2024-09-29 21:51:02.859587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.859610] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:43.904 [2024-09-29 21:51:02.859628] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:43.904 [2024-09-29 21:51:02.859664] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:43.904 [2024-09-29 21:51:02.859680] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:43.904 [2024-09-29 21:51:02.859785] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:43.904 [2024-09-29 21:51:02.859797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:43.904 [2024-09-29 21:51:02.859807] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:43.904 [2024-09-29 21:51:02.859820] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:43.904 [2024-09-29 21:51:02.859830] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:43.904 [2024-09-29 21:51:02.859837] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:43.904 [2024-09-29 21:51:02.859845] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:43.904 [2024-09-29 21:51:02.859852] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:43.904 [2024-09-29 21:51:02.859860] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:43.904 [2024-09-29 21:51:02.859868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.859875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:43.904 [2024-09-29 21:51:02.859883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:19:43.904 [2024-09-29 21:51:02.859890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.904 [2024-09-29 21:51:02.859972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.904 [2024-09-29 21:51:02.859983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:43.904 [2024-09-29 21:51:02.859991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:43.904 [2024-09-29 21:51:02.859999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.905 [2024-09-29 21:51:02.860112] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:43.905 [2024-09-29 21:51:02.860123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:43.905 [2024-09-29 21:51:02.860132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.905 [2024-09-29 21:51:02.860139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:43.905 [2024-09-29 21:51:02.860154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:43.905 [2024-09-29 21:51:02.860170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:43.905 [2024-09-29 21:51:02.860177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.905 [2024-09-29 21:51:02.860191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:43.905 [2024-09-29 21:51:02.860198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:43.905 [2024-09-29 21:51:02.860204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:43.905 [2024-09-29 21:51:02.860218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:43.905 [2024-09-29 21:51:02.860225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:43.905 [2024-09-29 21:51:02.860232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:43.905 [2024-09-29 21:51:02.860245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:43.905 [2024-09-29 21:51:02.860252] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:43.905 [2024-09-29 21:51:02.860270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.905 [2024-09-29 21:51:02.860283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:43.905 [2024-09-29 21:51:02.860289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.905 [2024-09-29 21:51:02.860302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:43.905 [2024-09-29 21:51:02.860310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.905 [2024-09-29 21:51:02.860323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:43.905 [2024-09-29 21:51:02.860329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:43.905 [2024-09-29 21:51:02.860343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:43.905 [2024-09-29 21:51:02.860350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.905 [2024-09-29 21:51:02.860363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:43.905 [2024-09-29 21:51:02.860369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:43.905 [2024-09-29 21:51:02.860375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:43.905 [2024-09-29 21:51:02.860382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:43.905 [2024-09-29 21:51:02.860407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:43.905 [2024-09-29 21:51:02.860414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:43.905 [2024-09-29 21:51:02.860429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:43.905 [2024-09-29 21:51:02.860437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860444] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:43.905 [2024-09-29 21:51:02.860452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:43.905 [2024-09-29 21:51:02.860462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:43.905 [2024-09-29 21:51:02.860471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:43.905 [2024-09-29 21:51:02.860479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:43.905 [2024-09-29 21:51:02.860486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:43.905 [2024-09-29 21:51:02.860493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:43.905 
[2024-09-29 21:51:02.860500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:43.905 [2024-09-29 21:51:02.860506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:43.905 [2024-09-29 21:51:02.860514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:43.905 [2024-09-29 21:51:02.860524] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:43.905 [2024-09-29 21:51:02.860533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.905 [2024-09-29 21:51:02.860542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:43.905 [2024-09-29 21:51:02.860550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:43.905 [2024-09-29 21:51:02.860564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:43.905 [2024-09-29 21:51:02.860572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:43.905 [2024-09-29 21:51:02.860579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:43.905 [2024-09-29 21:51:02.860586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:43.905 [2024-09-29 21:51:02.860593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:43.905 [2024-09-29 21:51:02.860601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:43.905 [2024-09-29 21:51:02.860608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:43.905 [2024-09-29 21:51:02.860615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:43.905 [2024-09-29 21:51:02.860621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:43.905 [2024-09-29 21:51:02.860629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:43.905 [2024-09-29 21:51:02.860636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:43.905 [2024-09-29 21:51:02.860644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:43.905 [2024-09-29 21:51:02.860651] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:43.905 [2024-09-29 21:51:02.860659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:43.905 [2024-09-29 21:51:02.860668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:43.905 [2024-09-29 21:51:02.860676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:43.905 [2024-09-29 21:51:02.860684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:43.905 [2024-09-29 21:51:02.860691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:43.905 [2024-09-29 21:51:02.860698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.905 [2024-09-29 21:51:02.860706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:43.905 [2024-09-29 21:51:02.860714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:19:43.905 [2024-09-29 21:51:02.860721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.164 [2024-09-29 21:51:02.898895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.164 [2024-09-29 21:51:02.898941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:44.164 [2024-09-29 21:51:02.898955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.130 ms 00:19:44.164 [2024-09-29 21:51:02.898963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.164 [2024-09-29 21:51:02.899066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.164 [2024-09-29 21:51:02.899076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:44.164 [2024-09-29 21:51:02.899084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:44.164 [2024-09-29 21:51:02.899092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.164 [2024-09-29 21:51:02.931422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.164 [2024-09-29 21:51:02.931460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:44.164 [2024-09-29 21:51:02.931473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.268 ms 00:19:44.164 [2024-09-29 21:51:02.931482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.164 [2024-09-29 21:51:02.931517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.164 [2024-09-29 21:51:02.931526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:44.164 [2024-09-29 21:51:02.931535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:44.164 [2024-09-29 21:51:02.931543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.164 [2024-09-29 21:51:02.931987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.164 [2024-09-29 21:51:02.932003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:44.165 [2024-09-29 21:51:02.932012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:19:44.165 [2024-09-29 21:51:02.932023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:02.932155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:02.932164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:44.165 [2024-09-29 21:51:02.932173] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:19:44.165 [2024-09-29 21:51:02.932181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:02.945554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:02.945583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:44.165 [2024-09-29 21:51:02.945593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.352 ms 00:19:44.165 [2024-09-29 21:51:02.945601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:02.958301] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:19:44.165 [2024-09-29 21:51:02.958335] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:44.165 [2024-09-29 21:51:02.958348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:02.958357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:44.165 [2024-09-29 21:51:02.958366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.653 ms 00:19:44.165 [2024-09-29 21:51:02.958374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:02.982867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:02.982901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:44.165 [2024-09-29 21:51:02.982914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.439 ms 00:19:44.165 [2024-09-29 21:51:02.982923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:02.994083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:02.994284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:44.165 [2024-09-29 21:51:02.994302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.118 ms 00:19:44.165 [2024-09-29 21:51:02.994311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:03.005375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:03.005415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:44.165 [2024-09-29 21:51:03.005425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.033 ms 00:19:44.165 [2024-09-29 21:51:03.005433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:03.006053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:03.006080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:44.165 [2024-09-29 21:51:03.006089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:19:44.165 [2024-09-29 21:51:03.006097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:03.064565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:03.064805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:44.165 [2024-09-29 21:51:03.064825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.450 ms 00:19:44.165 [2024-09-29 21:51:03.064834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:03.075685] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:44.165 [2024-09-29 21:51:03.078553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:03.078580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:44.165 [2024-09-29 21:51:03.078594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.610 ms 00:19:44.165 [2024-09-29 21:51:03.078607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:03.078707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:03.078718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:44.165 [2024-09-29 21:51:03.078728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:44.165 [2024-09-29 21:51:03.078735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:03.080332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:03.080362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:44.165 [2024-09-29 21:51:03.080373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.557 ms 00:19:44.165 [2024-09-29 21:51:03.080381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:03.080423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:03.080432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:44.165 [2024-09-29 21:51:03.080442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:44.165 [2024-09-29 21:51:03.080450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:03.080486] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:44.165 [2024-09-29 21:51:03.080497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:03.080504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:44.165 [2024-09-29 21:51:03.080516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:44.165 [2024-09-29 21:51:03.080524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:03.104350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:03.104383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:44.165 [2024-09-29 21:51:03.104405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.809 ms 00:19:44.165 [2024-09-29 21:51:03.104414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.165 [2024-09-29 21:51:03.104486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.165 [2024-09-29 21:51:03.104497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:44.165 [2024-09-29 21:51:03.104506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:44.165 [2024-09-29 21:51:03.104514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
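Every management step traced above is logged by trace_step() in mngt/ftl_mngt.c as a fixed quartet: Action (or Rollback), name, duration, status. The 'FTL startup' finish_msg line that follows reports the overall duration, so one quick sanity check when reading such a trace is to total the per-step durations against it. A minimal sketch, assuming this console output has been saved to a hypothetical ftl.log:

    # Sum every per-step duration reported by trace_step. For the startup
    # sequence above, the total comes out a little under the 271.019 ms that
    # finish_msg reports; the two need not match exactly.
    grep -o 'duration: [0-9.]* ms' ftl.log |
        awk '{ total += $2 } END { printf "%d steps, %.3f ms total\n", NR, total }'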
00:19:44.165 [2024-09-29 21:51:03.105539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 271.019 ms, result 0 00:20:06.405  Copying: 43/1024 [MB] (43 MBps) Copying: 92/1024 [MB] (49 MBps) Copying: 141/1024 [MB] (48 MBps) Copying: 187/1024 [MB] (45 MBps) Copying: 231/1024 [MB] (44 MBps) Copying: 275/1024 [MB] (44 MBps) Copying: 317/1024 [MB] (41 MBps) Copying: 367/1024 [MB] (49 MBps) Copying: 414/1024 [MB] (47 MBps) Copying: 462/1024 [MB] (48 MBps) Copying: 512/1024 [MB] (49 MBps) Copying: 560/1024 [MB] (48 MBps) Copying: 604/1024 [MB] (43 MBps) Copying: 653/1024 [MB] (48 MBps) Copying: 699/1024 [MB] (46 MBps) Copying: 745/1024 [MB] (45 MBps) Copying: 793/1024 [MB] (47 MBps) Copying: 841/1024 [MB] (48 MBps) Copying: 889/1024 [MB] (47 MBps) Copying: 939/1024 [MB] (50 MBps) Copying: 988/1024 [MB] (48 MBps) Copying: 1024/1024 [MB] (average 47 MBps)[2024-09-29 21:51:25.361325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.405 [2024-09-29 21:51:25.361438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:06.405 [2024-09-29 21:51:25.361459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.405 [2024-09-29 21:51:25.361472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.405 [2024-09-29 21:51:25.361504] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:06.405 [2024-09-29 21:51:25.365620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.405 [2024-09-29 21:51:25.365662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:06.405 [2024-09-29 21:51:25.365677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.095 ms 00:20:06.406 [2024-09-29 21:51:25.365696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.406 [2024-09-29 21:51:25.368244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.406 [2024-09-29 21:51:25.368280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:06.406 [2024-09-29 21:51:25.368293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.515 ms 00:20:06.406 [2024-09-29 21:51:25.368303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.406 [2024-09-29 21:51:25.373820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.406 [2024-09-29 21:51:25.373855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:06.406 [2024-09-29 21:51:25.373868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.498 ms 00:20:06.406 [2024-09-29 21:51:25.373878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.406 [2024-09-29 21:51:25.381190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.406 [2024-09-29 21:51:25.381424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:06.406 [2024-09-29 21:51:25.381443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.271 ms 00:20:06.406 [2024-09-29 21:51:25.381455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.665 [2024-09-29 21:51:25.406096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.665 [2024-09-29 21:51:25.406129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:06.665 
[2024-09-29 21:51:25.406141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.595 ms 00:20:06.665 [2024-09-29 21:51:25.406148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.665 [2024-09-29 21:51:25.420544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.665 [2024-09-29 21:51:25.420704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:06.665 [2024-09-29 21:51:25.420721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.362 ms 00:20:06.665 [2024-09-29 21:51:25.420730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.665 [2024-09-29 21:51:25.479722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.666 [2024-09-29 21:51:25.479911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:06.666 [2024-09-29 21:51:25.479940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.955 ms 00:20:06.666 [2024-09-29 21:51:25.479949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.666 [2024-09-29 21:51:25.503720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.666 [2024-09-29 21:51:25.503756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:06.666 [2024-09-29 21:51:25.503768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.752 ms 00:20:06.666 [2024-09-29 21:51:25.503776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.666 [2024-09-29 21:51:25.526642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.666 [2024-09-29 21:51:25.526793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:06.666 [2024-09-29 21:51:25.526808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.834 ms 00:20:06.666 [2024-09-29 21:51:25.526816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.666 [2024-09-29 21:51:25.549036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.666 [2024-09-29 21:51:25.549067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:06.666 [2024-09-29 21:51:25.549079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.192 ms 00:20:06.666 [2024-09-29 21:51:25.549087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.666 [2024-09-29 21:51:25.571761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.666 [2024-09-29 21:51:25.571790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:06.666 [2024-09-29 21:51:25.571800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.620 ms 00:20:06.666 [2024-09-29 21:51:25.571808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.666 [2024-09-29 21:51:25.571839] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:06.666 [2024-09-29 21:51:25.571853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:20:06.666 [2024-09-29 21:51:25.571863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571879] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.571994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 
21:51:25.572072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:20:06.666 [2024-09-29 21:51:25.572258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:06.666 [2024-09-29 21:51:25.572410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:06.667 [2024-09-29 21:51:25.572679] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:06.667 [2024-09-29 21:51:25.572687] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc72902e-1a7b-4b64-8768-6432d06942bd 00:20:06.667 [2024-09-29 21:51:25.572699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 
00:20:06.667 [2024-09-29 21:51:25.572706] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13248 00:20:06.667 [2024-09-29 21:51:25.572714] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12288 00:20:06.667 [2024-09-29 21:51:25.572723] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0781 00:20:06.667 [2024-09-29 21:51:25.572730] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:06.667 [2024-09-29 21:51:25.572738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:06.667 [2024-09-29 21:51:25.572746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:06.667 [2024-09-29 21:51:25.572753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:06.667 [2024-09-29 21:51:25.572760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:06.667 [2024-09-29 21:51:25.572768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.667 [2024-09-29 21:51:25.572779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:06.667 [2024-09-29 21:51:25.572804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:20:06.667 [2024-09-29 21:51:25.572812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.667 [2024-09-29 21:51:25.585776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.667 [2024-09-29 21:51:25.585805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:06.667 [2024-09-29 21:51:25.585816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.947 ms 00:20:06.667 [2024-09-29 21:51:25.585824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.667 [2024-09-29 21:51:25.586198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.667 [2024-09-29 21:51:25.586216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:06.667 [2024-09-29 21:51:25.586230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:20:06.667 [2024-09-29 21:51:25.586238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.667 [2024-09-29 21:51:25.615947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.667 [2024-09-29 21:51:25.615983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.667 [2024-09-29 21:51:25.615994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.667 [2024-09-29 21:51:25.616002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.667 [2024-09-29 21:51:25.616063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.667 [2024-09-29 21:51:25.616072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.667 [2024-09-29 21:51:25.616079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.667 [2024-09-29 21:51:25.616091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.667 [2024-09-29 21:51:25.616151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.667 [2024-09-29 21:51:25.616160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.667 [2024-09-29 21:51:25.616169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.667 [2024-09-29 21:51:25.616176] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.667 [2024-09-29 21:51:25.616192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.667 [2024-09-29 21:51:25.616201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.667 [2024-09-29 21:51:25.616209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.667 [2024-09-29 21:51:25.616216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.926 [2024-09-29 21:51:25.698225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.926 [2024-09-29 21:51:25.698274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.926 [2024-09-29 21:51:25.698287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.926 [2024-09-29 21:51:25.698296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.926 [2024-09-29 21:51:25.765278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.926 [2024-09-29 21:51:25.765335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.926 [2024-09-29 21:51:25.765347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.926 [2024-09-29 21:51:25.765359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.926 [2024-09-29 21:51:25.765452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.926 [2024-09-29 21:51:25.765462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.926 [2024-09-29 21:51:25.765471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.926 [2024-09-29 21:51:25.765479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.926 [2024-09-29 21:51:25.765515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.926 [2024-09-29 21:51:25.765524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.926 [2024-09-29 21:51:25.765533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.926 [2024-09-29 21:51:25.765541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.926 [2024-09-29 21:51:25.765634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.926 [2024-09-29 21:51:25.765645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.926 [2024-09-29 21:51:25.765653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.926 [2024-09-29 21:51:25.765661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.926 [2024-09-29 21:51:25.765693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.926 [2024-09-29 21:51:25.765703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:06.926 [2024-09-29 21:51:25.765710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.926 [2024-09-29 21:51:25.765718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.926 [2024-09-29 21:51:25.765762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.926 [2024-09-29 21:51:25.765772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.926 [2024-09-29 21:51:25.765781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms
00:20:06.926 [2024-09-29 21:51:25.765789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:06.926 [2024-09-29 21:51:25.765837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:06.926 [2024-09-29 21:51:25.765854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:06.926 [2024-09-29 21:51:25.765863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:06.926 [2024-09-29 21:51:25.765872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:06.926 [2024-09-29 21:51:25.765995] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.648 ms, result 0
00:20:07.861
00:20:07.861
00:20:07.861 21:51:26 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:09.763 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74712
00:20:09.763 21:51:28 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74712 ']'
00:20:09.763 21:51:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74712
00:20:09.763 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74712) - No such process
00:20:09.763 Process with pid 74712 is not found
00:20:09.763 21:51:28 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74712 is not found'
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:20:09.763 Remove shared memory files
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:20:09.763 21:51:28 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:20:09.763
00:20:09.763 real 2m3.774s
00:20:09.763 user 1m52.998s
00:20:09.763 sys 0m12.106s
00:20:09.763 21:51:28 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:09.763 21:51:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:20:09.763 ************************************
00:20:09.763 END TEST ftl_restore
00:20:09.763 ************************************
00:20:09.763 21:51:28 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:20:09.763 21:51:28 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:20:09.763 21:51:28 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:09.763 21:51:28 ftl -- common/autotest_common.sh@10 -- # set +x
00:20:09.763 ************************************
00:20:09.763 START TEST ftl_dirty_shutdown
************************************
00:20:09.763 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:20:10.021 * Looking for test storage...
00:20:10.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:20:10.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:10.021 --rc genhtml_branch_coverage=1
00:20:10.021 --rc genhtml_function_coverage=1
00:20:10.021 --rc genhtml_legend=1
00:20:10.021 --rc geninfo_all_blocks=1
00:20:10.021 --rc geninfo_unexecuted_blocks=1
00:20:10.021
00:20:10.021 '
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:20:10.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:10.021 --rc genhtml_branch_coverage=1
00:20:10.021 --rc genhtml_function_coverage=1
00:20:10.021 --rc genhtml_legend=1
00:20:10.021 --rc geninfo_all_blocks=1
00:20:10.021 --rc geninfo_unexecuted_blocks=1
00:20:10.021
00:20:10.021 '
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:20:10.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:10.021 --rc genhtml_branch_coverage=1
00:20:10.021 --rc genhtml_function_coverage=1
00:20:10.021 --rc genhtml_legend=1
00:20:10.021 --rc geninfo_all_blocks=1
00:20:10.021 --rc geninfo_unexecuted_blocks=1
00:20:10.021
00:20:10.021 '
00:20:10.021 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:20:10.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:10.022 --rc genhtml_branch_coverage=1
00:20:10.022 --rc genhtml_function_coverage=1
00:20:10.022 --rc genhtml_legend=1
00:20:10.022 --rc geninfo_all_blocks=1
00:20:10.022 --rc geninfo_unexecuted_blocks=1
00:20:10.022
00:20:10.022 '
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2
00:20:10.022 21:51:28
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=76077
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76077
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 76077 ']'
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:10.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:10.022 21:51:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:10.022 [2024-09-29 21:51:28.952660] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
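Editor's note (not part of the captured log): the xtrace above shows the standard target-launch pattern used throughout these tests. dirty_shutdown.sh backgrounds spdk_tgt pinned to core 0 (-m 0x1), records its pid in svcpid, and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers (the trace shows max_retries=100). A minimal sketch of that pattern; the polling loop is an illustrative stand-in for waitforlisten, not its exact implementation:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # Poll the RPC socket until the target responds; rpc_get_methods is a
  # lightweight RPC that only succeeds once the app is listening.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done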
00:20:10.022 [2024-09-29 21:51:28.952795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76077 ] 00:20:10.279 [2024-09-29 21:51:29.100878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.536 [2024-09-29 21:51:29.317226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.101 21:51:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.101 21:51:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:20:11.101 21:51:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:11.101 21:51:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:11.101 21:51:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:11.101 21:51:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:11.101 21:51:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:11.101 21:51:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:11.357 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:11.357 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:11.357 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:11.357 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:11.357 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:11.357 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:11.357 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:11.357 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:11.614 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:11.614 { 00:20:11.614 "name": "nvme0n1", 00:20:11.614 "aliases": [ 00:20:11.614 "5a1a60bb-090b-4ef9-9953-50f536bd5ae0" 00:20:11.614 ], 00:20:11.614 "product_name": "NVMe disk", 00:20:11.614 "block_size": 4096, 00:20:11.614 "num_blocks": 1310720, 00:20:11.614 "uuid": "5a1a60bb-090b-4ef9-9953-50f536bd5ae0", 00:20:11.614 "numa_id": -1, 00:20:11.614 "assigned_rate_limits": { 00:20:11.614 "rw_ios_per_sec": 0, 00:20:11.614 "rw_mbytes_per_sec": 0, 00:20:11.614 "r_mbytes_per_sec": 0, 00:20:11.614 "w_mbytes_per_sec": 0 00:20:11.614 }, 00:20:11.614 "claimed": true, 00:20:11.614 "claim_type": "read_many_write_one", 00:20:11.614 "zoned": false, 00:20:11.614 "supported_io_types": { 00:20:11.614 "read": true, 00:20:11.614 "write": true, 00:20:11.614 "unmap": true, 00:20:11.614 "flush": true, 00:20:11.614 "reset": true, 00:20:11.614 "nvme_admin": true, 00:20:11.614 "nvme_io": true, 00:20:11.614 "nvme_io_md": false, 00:20:11.614 "write_zeroes": true, 00:20:11.614 "zcopy": false, 00:20:11.614 "get_zone_info": false, 00:20:11.614 "zone_management": false, 00:20:11.615 "zone_append": false, 00:20:11.615 "compare": true, 00:20:11.615 "compare_and_write": false, 00:20:11.615 "abort": true, 00:20:11.615 "seek_hole": false, 00:20:11.615 "seek_data": false, 00:20:11.615 
"copy": true, 00:20:11.615 "nvme_iov_md": false 00:20:11.615 }, 00:20:11.615 "driver_specific": { 00:20:11.615 "nvme": [ 00:20:11.615 { 00:20:11.615 "pci_address": "0000:00:11.0", 00:20:11.615 "trid": { 00:20:11.615 "trtype": "PCIe", 00:20:11.615 "traddr": "0000:00:11.0" 00:20:11.615 }, 00:20:11.615 "ctrlr_data": { 00:20:11.615 "cntlid": 0, 00:20:11.615 "vendor_id": "0x1b36", 00:20:11.615 "model_number": "QEMU NVMe Ctrl", 00:20:11.615 "serial_number": "12341", 00:20:11.615 "firmware_revision": "8.0.0", 00:20:11.615 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:11.615 "oacs": { 00:20:11.615 "security": 0, 00:20:11.615 "format": 1, 00:20:11.615 "firmware": 0, 00:20:11.615 "ns_manage": 1 00:20:11.615 }, 00:20:11.615 "multi_ctrlr": false, 00:20:11.615 "ana_reporting": false 00:20:11.615 }, 00:20:11.615 "vs": { 00:20:11.615 "nvme_version": "1.4" 00:20:11.615 }, 00:20:11.615 "ns_data": { 00:20:11.615 "id": 1, 00:20:11.615 "can_share": false 00:20:11.615 } 00:20:11.615 } 00:20:11.615 ], 00:20:11.615 "mp_policy": "active_passive" 00:20:11.615 } 00:20:11.615 } 00:20:11.615 ]' 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:11.615 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:11.873 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=89b6503d-9088-4113-8709-afeab6cd0336 00:20:11.873 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:11.873 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 89b6503d-9088-4113-8709-afeab6cd0336 00:20:12.131 21:51:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:12.131 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=2d5b9253-2d5a-4dda-be47-f30b5df4ebd4 00:20:12.131 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2d5b9253-2d5a-4dda-be47-f30b5df4ebd4 00:20:12.388 21:51:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:12.389 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:12.647 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:12.647 { 00:20:12.647 "name": "200e98fb-16c0-4ddf-bf0e-5224bba78661", 00:20:12.647 "aliases": [ 00:20:12.647 "lvs/nvme0n1p0" 00:20:12.647 ], 00:20:12.647 "product_name": "Logical Volume", 00:20:12.647 "block_size": 4096, 00:20:12.647 "num_blocks": 26476544, 00:20:12.647 "uuid": "200e98fb-16c0-4ddf-bf0e-5224bba78661", 00:20:12.647 "assigned_rate_limits": { 00:20:12.647 "rw_ios_per_sec": 0, 00:20:12.647 "rw_mbytes_per_sec": 0, 00:20:12.647 "r_mbytes_per_sec": 0, 00:20:12.647 "w_mbytes_per_sec": 0 00:20:12.647 }, 00:20:12.647 "claimed": false, 00:20:12.647 "zoned": false, 00:20:12.647 "supported_io_types": { 00:20:12.647 "read": true, 00:20:12.647 "write": true, 00:20:12.647 "unmap": true, 00:20:12.647 "flush": false, 00:20:12.647 "reset": true, 00:20:12.647 "nvme_admin": false, 00:20:12.647 "nvme_io": false, 00:20:12.647 "nvme_io_md": false, 00:20:12.647 "write_zeroes": true, 00:20:12.647 "zcopy": false, 00:20:12.647 "get_zone_info": false, 00:20:12.647 "zone_management": false, 00:20:12.647 "zone_append": false, 00:20:12.647 "compare": false, 00:20:12.647 "compare_and_write": false, 00:20:12.647 "abort": false, 00:20:12.647 "seek_hole": true, 00:20:12.647 "seek_data": true, 00:20:12.647 "copy": false, 00:20:12.647 "nvme_iov_md": false 00:20:12.647 }, 00:20:12.647 "driver_specific": { 00:20:12.647 "lvol": { 00:20:12.647 "lvol_store_uuid": "2d5b9253-2d5a-4dda-be47-f30b5df4ebd4", 00:20:12.647 "base_bdev": "nvme0n1", 00:20:12.647 "thin_provision": true, 00:20:12.647 "num_allocated_clusters": 0, 00:20:12.647 "snapshot": false, 00:20:12.647 "clone": false, 00:20:12.647 "esnap_clone": false 00:20:12.647 } 00:20:12.647 } 00:20:12.647 } 00:20:12.647 ]' 00:20:12.647 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:12.647 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:12.647 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:12.647 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:12.647 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:12.647 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:12.647 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:12.647 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:12.647 21:51:31 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:12.905 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:12.905 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:12.905 21:51:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:12.905 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:12.905 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:12.905 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:12.905 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:12.905 21:51:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:13.163 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:13.163 { 00:20:13.163 "name": "200e98fb-16c0-4ddf-bf0e-5224bba78661", 00:20:13.163 "aliases": [ 00:20:13.163 "lvs/nvme0n1p0" 00:20:13.163 ], 00:20:13.163 "product_name": "Logical Volume", 00:20:13.163 "block_size": 4096, 00:20:13.163 "num_blocks": 26476544, 00:20:13.163 "uuid": "200e98fb-16c0-4ddf-bf0e-5224bba78661", 00:20:13.163 "assigned_rate_limits": { 00:20:13.163 "rw_ios_per_sec": 0, 00:20:13.163 "rw_mbytes_per_sec": 0, 00:20:13.163 "r_mbytes_per_sec": 0, 00:20:13.163 "w_mbytes_per_sec": 0 00:20:13.163 }, 00:20:13.163 "claimed": false, 00:20:13.163 "zoned": false, 00:20:13.163 "supported_io_types": { 00:20:13.163 "read": true, 00:20:13.163 "write": true, 00:20:13.163 "unmap": true, 00:20:13.163 "flush": false, 00:20:13.163 "reset": true, 00:20:13.163 "nvme_admin": false, 00:20:13.163 "nvme_io": false, 00:20:13.163 "nvme_io_md": false, 00:20:13.163 "write_zeroes": true, 00:20:13.163 "zcopy": false, 00:20:13.163 "get_zone_info": false, 00:20:13.163 "zone_management": false, 00:20:13.163 "zone_append": false, 00:20:13.163 "compare": false, 00:20:13.163 "compare_and_write": false, 00:20:13.163 "abort": false, 00:20:13.163 "seek_hole": true, 00:20:13.163 "seek_data": true, 00:20:13.163 "copy": false, 00:20:13.163 "nvme_iov_md": false 00:20:13.163 }, 00:20:13.163 "driver_specific": { 00:20:13.163 "lvol": { 00:20:13.163 "lvol_store_uuid": "2d5b9253-2d5a-4dda-be47-f30b5df4ebd4", 00:20:13.163 "base_bdev": "nvme0n1", 00:20:13.163 "thin_provision": true, 00:20:13.163 "num_allocated_clusters": 0, 00:20:13.163 "snapshot": false, 00:20:13.163 "clone": false, 00:20:13.163 "esnap_clone": false 00:20:13.163 } 00:20:13.163 } 00:20:13.163 } 00:20:13.163 ]' 00:20:13.163 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:13.163 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:13.163 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:13.163 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:13.163 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:13.163 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:13.163 21:51:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:20:13.163 21:51:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:13.420 21:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:13.420 21:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:13.420 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:13.420 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:13.420 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:13.420 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:13.420 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 200e98fb-16c0-4ddf-bf0e-5224bba78661 00:20:13.677 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:13.677 { 00:20:13.677 "name": "200e98fb-16c0-4ddf-bf0e-5224bba78661", 00:20:13.677 "aliases": [ 00:20:13.677 "lvs/nvme0n1p0" 00:20:13.677 ], 00:20:13.677 "product_name": "Logical Volume", 00:20:13.677 "block_size": 4096, 00:20:13.677 "num_blocks": 26476544, 00:20:13.677 "uuid": "200e98fb-16c0-4ddf-bf0e-5224bba78661", 00:20:13.677 "assigned_rate_limits": { 00:20:13.677 "rw_ios_per_sec": 0, 00:20:13.677 "rw_mbytes_per_sec": 0, 00:20:13.677 "r_mbytes_per_sec": 0, 00:20:13.677 "w_mbytes_per_sec": 0 00:20:13.677 }, 00:20:13.677 "claimed": false, 00:20:13.677 "zoned": false, 00:20:13.677 "supported_io_types": { 00:20:13.677 "read": true, 00:20:13.677 "write": true, 00:20:13.677 "unmap": true, 00:20:13.677 "flush": false, 00:20:13.677 "reset": true, 00:20:13.677 "nvme_admin": false, 00:20:13.677 "nvme_io": false, 00:20:13.677 "nvme_io_md": false, 00:20:13.677 "write_zeroes": true, 00:20:13.677 "zcopy": false, 00:20:13.677 "get_zone_info": false, 00:20:13.677 "zone_management": false, 00:20:13.677 "zone_append": false, 00:20:13.677 "compare": false, 00:20:13.677 "compare_and_write": false, 00:20:13.677 "abort": false, 00:20:13.678 "seek_hole": true, 00:20:13.678 "seek_data": true, 00:20:13.678 "copy": false, 00:20:13.678 "nvme_iov_md": false 00:20:13.678 }, 00:20:13.678 "driver_specific": { 00:20:13.678 "lvol": { 00:20:13.678 "lvol_store_uuid": "2d5b9253-2d5a-4dda-be47-f30b5df4ebd4", 00:20:13.678 "base_bdev": "nvme0n1", 00:20:13.678 "thin_provision": true, 00:20:13.678 "num_allocated_clusters": 0, 00:20:13.678 "snapshot": false, 00:20:13.678 "clone": false, 00:20:13.678 "esnap_clone": false 00:20:13.678 } 00:20:13.678 } 00:20:13.678 } 00:20:13.678 ]' 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 200e98fb-16c0-4ddf-bf0e-5224bba78661 
--l2p_dram_limit 10' 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:13.678 21:51:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 200e98fb-16c0-4ddf-bf0e-5224bba78661 --l2p_dram_limit 10 -c nvc0n1p0 00:20:13.936 [2024-09-29 21:51:32.780168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.780238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:13.936 [2024-09-29 21:51:32.780252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:13.936 [2024-09-29 21:51:32.780259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.780312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.780321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.936 [2024-09-29 21:51:32.780331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:13.936 [2024-09-29 21:51:32.780353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.780377] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:13.936 [2024-09-29 21:51:32.781055] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:13.936 [2024-09-29 21:51:32.781080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.781087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.936 [2024-09-29 21:51:32.781095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:20:13.936 [2024-09-29 21:51:32.781103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.781141] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3d6e5af9-4fd9-4dfc-bde3-3b26bab28ab1 00:20:13.936 [2024-09-29 21:51:32.782539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.782573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:13.936 [2024-09-29 21:51:32.782582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:13.936 [2024-09-29 21:51:32.782591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.789650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.789684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.936 [2024-09-29 21:51:32.789692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.994 ms 00:20:13.936 [2024-09-29 21:51:32.789701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.789779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.789789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.936 [2024-09-29 21:51:32.789797] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:13.936 [2024-09-29 21:51:32.789810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.789859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.789870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:13.936 [2024-09-29 21:51:32.789876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:13.936 [2024-09-29 21:51:32.789884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.789902] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:13.936 [2024-09-29 21:51:32.793236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.793264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.936 [2024-09-29 21:51:32.793275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:20:13.936 [2024-09-29 21:51:32.793282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.793312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.793320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:13.936 [2024-09-29 21:51:32.793328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:13.936 [2024-09-29 21:51:32.793336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.793351] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:13.936 [2024-09-29 21:51:32.793481] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:13.936 [2024-09-29 21:51:32.793497] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:13.936 [2024-09-29 21:51:32.793506] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:13.936 [2024-09-29 21:51:32.793518] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:13.936 [2024-09-29 21:51:32.793526] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:13.936 [2024-09-29 21:51:32.793535] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:13.936 [2024-09-29 21:51:32.793541] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:13.936 [2024-09-29 21:51:32.793549] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:13.936 [2024-09-29 21:51:32.793556] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:13.936 [2024-09-29 21:51:32.793564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.793575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:13.936 [2024-09-29 21:51:32.793583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:20:13.936 [2024-09-29 21:51:32.793590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.793659] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.936 [2024-09-29 21:51:32.793667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:13.936 [2024-09-29 21:51:32.793676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:13.936 [2024-09-29 21:51:32.793682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.936 [2024-09-29 21:51:32.793759] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:13.936 [2024-09-29 21:51:32.793773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:13.937 [2024-09-29 21:51:32.793782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.937 [2024-09-29 21:51:32.793789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.937 [2024-09-29 21:51:32.793797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:13.937 [2024-09-29 21:51:32.793802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:13.937 [2024-09-29 21:51:32.793809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:13.937 [2024-09-29 21:51:32.793815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:13.937 [2024-09-29 21:51:32.793822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:13.937 [2024-09-29 21:51:32.793828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.937 [2024-09-29 21:51:32.793834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:13.937 [2024-09-29 21:51:32.793839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:13.937 [2024-09-29 21:51:32.793846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.937 [2024-09-29 21:51:32.793852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:13.937 [2024-09-29 21:51:32.793858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:13.937 [2024-09-29 21:51:32.793863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.937 [2024-09-29 21:51:32.793871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:13.937 [2024-09-29 21:51:32.793876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:13.937 [2024-09-29 21:51:32.793882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.937 [2024-09-29 21:51:32.793887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:13.937 [2024-09-29 21:51:32.793896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:13.937 [2024-09-29 21:51:32.793901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.937 [2024-09-29 21:51:32.793908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:13.937 [2024-09-29 21:51:32.793914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:13.937 [2024-09-29 21:51:32.793920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.937 [2024-09-29 21:51:32.793926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:13.937 [2024-09-29 21:51:32.793934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:13.937 [2024-09-29 21:51:32.793939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.937 [2024-09-29 21:51:32.793945] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:13.937 [2024-09-29 21:51:32.793950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:13.937 [2024-09-29 21:51:32.793956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.937 [2024-09-29 21:51:32.793961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:13.937 [2024-09-29 21:51:32.793970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:13.937 [2024-09-29 21:51:32.793976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.937 [2024-09-29 21:51:32.793983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:13.937 [2024-09-29 21:51:32.793988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:13.937 [2024-09-29 21:51:32.793994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.937 [2024-09-29 21:51:32.793999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:13.937 [2024-09-29 21:51:32.794006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:13.937 [2024-09-29 21:51:32.794011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.937 [2024-09-29 21:51:32.794017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:13.937 [2024-09-29 21:51:32.794023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:13.937 [2024-09-29 21:51:32.794029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.937 [2024-09-29 21:51:32.794034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:13.937 [2024-09-29 21:51:32.794042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:13.937 [2024-09-29 21:51:32.794050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.937 [2024-09-29 21:51:32.794058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.937 [2024-09-29 21:51:32.794065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:13.937 [2024-09-29 21:51:32.794073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:13.937 [2024-09-29 21:51:32.794078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:13.937 [2024-09-29 21:51:32.794085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:13.937 [2024-09-29 21:51:32.794090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:13.937 [2024-09-29 21:51:32.794097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:13.937 [2024-09-29 21:51:32.794106] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:13.937 [2024-09-29 21:51:32.794115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.937 [2024-09-29 21:51:32.794124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:13.937 [2024-09-29 21:51:32.794133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:13.937 [2024-09-29 21:51:32.794138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:13.937 [2024-09-29 21:51:32.794145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:13.937 [2024-09-29 21:51:32.794150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:13.937 [2024-09-29 21:51:32.794157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:13.937 [2024-09-29 21:51:32.794163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:13.937 [2024-09-29 21:51:32.794170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:13.937 [2024-09-29 21:51:32.794176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:13.937 [2024-09-29 21:51:32.794184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:13.937 [2024-09-29 21:51:32.794189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:13.937 [2024-09-29 21:51:32.794203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:13.937 [2024-09-29 21:51:32.794209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:13.937 [2024-09-29 21:51:32.794217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:13.937 [2024-09-29 21:51:32.794222] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:13.937 [2024-09-29 21:51:32.794232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.937 [2024-09-29 21:51:32.794238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:13.937 [2024-09-29 21:51:32.794245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:13.937 [2024-09-29 21:51:32.794251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:13.937 [2024-09-29 21:51:32.794258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:13.937 [2024-09-29 21:51:32.794264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.937 [2024-09-29 21:51:32.794271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:13.937 [2024-09-29 21:51:32.794277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:20:13.937 [2024-09-29 21:51:32.794285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.937 [2024-09-29 21:51:32.794331] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:13.937 [2024-09-29 21:51:32.794348] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:16.464 [2024-09-29 21:51:34.877741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.464 [2024-09-29 21:51:34.877810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:16.464 [2024-09-29 21:51:34.877826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2083.397 ms 00:20:16.464 [2024-09-29 21:51:34.877837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.464 [2024-09-29 21:51:34.906710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.464 [2024-09-29 21:51:34.906769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.464 [2024-09-29 21:51:34.906785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.586 ms 00:20:16.464 [2024-09-29 21:51:34.906795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.464 [2024-09-29 21:51:34.906949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.464 [2024-09-29 21:51:34.906963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:16.464 [2024-09-29 21:51:34.906972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:16.464 [2024-09-29 21:51:34.906987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.464 [2024-09-29 21:51:34.947066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.464 [2024-09-29 21:51:34.947126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.464 [2024-09-29 21:51:34.947145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.024 ms 00:20:16.464 [2024-09-29 21:51:34.947155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.464 [2024-09-29 21:51:34.947216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.464 [2024-09-29 21:51:34.947228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.464 [2024-09-29 21:51:34.947237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:16.464 [2024-09-29 21:51:34.947256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:34.947728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:34.947761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.465 [2024-09-29 21:51:34.947771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:20:16.465 [2024-09-29 21:51:34.947784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:34.947903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:34.947915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.465 [2024-09-29 21:51:34.947924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:16.465 [2024-09-29 21:51:34.947936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:34.962643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:34.962676] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.465 [2024-09-29 21:51:34.962688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.688 ms 00:20:16.465 [2024-09-29 21:51:34.962697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:34.974921] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:16.465 [2024-09-29 21:51:34.978169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:34.978213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:16.465 [2024-09-29 21:51:34.978228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.381 ms 00:20:16.465 [2024-09-29 21:51:34.978237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.033962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.034004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:16.465 [2024-09-29 21:51:35.034021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.695 ms 00:20:16.465 [2024-09-29 21:51:35.034030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.034229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.034242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:16.465 [2024-09-29 21:51:35.034255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:20:16.465 [2024-09-29 21:51:35.034263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.057897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.057936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:16.465 [2024-09-29 21:51:35.057948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.589 ms 00:20:16.465 [2024-09-29 21:51:35.057957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.080789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.080822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:16.465 [2024-09-29 21:51:35.080836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.793 ms 00:20:16.465 [2024-09-29 21:51:35.080843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.081426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.081451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:16.465 [2024-09-29 21:51:35.081462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:20:16.465 [2024-09-29 21:51:35.081471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.150708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.150751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:16.465 [2024-09-29 21:51:35.150770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.201 ms 00:20:16.465 [2024-09-29 21:51:35.150781] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.176068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.176109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:16.465 [2024-09-29 21:51:35.176123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.211 ms 00:20:16.465 [2024-09-29 21:51:35.176131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.200594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.200634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:16.465 [2024-09-29 21:51:35.200646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.421 ms 00:20:16.465 [2024-09-29 21:51:35.200653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.224353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.224401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:16.465 [2024-09-29 21:51:35.224416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.656 ms 00:20:16.465 [2024-09-29 21:51:35.224424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.224467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.224477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:16.465 [2024-09-29 21:51:35.224494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:16.465 [2024-09-29 21:51:35.224501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.224587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.465 [2024-09-29 21:51:35.224598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:16.465 [2024-09-29 21:51:35.224608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:16.465 [2024-09-29 21:51:35.224615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.465 [2024-09-29 21:51:35.225976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2445.341 ms, result 0 00:20:16.465 { 00:20:16.465 "name": "ftl0", 00:20:16.465 "uuid": "3d6e5af9-4fd9-4dfc-bde3-3b26bab28ab1" 00:20:16.465 } 00:20:16.465 21:51:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:16.465 21:51:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:16.723 /dev/nbd0 00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 ))
00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break
00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:20:16.723 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:20:16.724 1+0 records in
00:20:16.724 1+0 records out
00:20:16.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307938 s, 13.3 MB/s
00:20:16.724 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:20:16.724 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096
00:20:16.724 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:20:16.724 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:20:16.724 21:51:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0
00:20:16.724 21:51:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:20:16.982 [2024-09-29 21:51:35.752715] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:20:16.982 [2024-09-29 21:51:35.752841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76202 ]
00:20:16.982 [2024-09-29 21:51:35.893653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:17.240 [2024-09-29 21:51:36.079100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:20:22.944  Copying: 195/1024 [MB] (195 MBps) Copying: 391/1024 [MB] (195 MBps) Copying: 586/1024 [MB] (195 MBps) Copying: 815/1024 [MB] (229 MBps) Copying: 1024/1024 [MB] (average 212 MBps)
00:20:22.944
00:20:22.944
00:20:22.944 21:51:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:20:24.841 21:51:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:20:24.841 [2024-09-29 21:51:43.813756] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
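Editor's note (not part of the captured log): the spdk_dd pass starting here replays the 1 GiB testfile (262144 blocks of 4096 bytes, generated from /dev/urandom and checksummed above) into the FTL bdev through the kernel nbd device, with O_DIRECT so writes bypass the page cache. A rough coreutils equivalent of the command traced at dirty_shutdown.sh@77, for illustration only; spdk_dd drives the same copy from a dedicated SPDK reactor (core mask 0x2 in this run):

  dd if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct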
00:20:24.841 [2024-09-29 21:51:43.814138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76289 ] 00:20:25.099 [2024-09-29 21:51:43.962869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.358 [2024-09-29 21:51:44.177782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.646  Copying: 32/1024 [MB] (32 MBps) Copying: 61/1024 [MB] (28 MBps) Copying: 89/1024 [MB] (27 MBps) Copying: 117/1024 [MB] (28 MBps) Copying: 146/1024 [MB] (28 MBps) Copying: 174/1024 [MB] (28 MBps) Copying: 203/1024 [MB] (28 MBps) Copying: 235/1024 [MB] (32 MBps) Copying: 269/1024 [MB] (34 MBps) Copying: 303/1024 [MB] (34 MBps) Copying: 331/1024 [MB] (28 MBps) Copying: 362/1024 [MB] (31 MBps) Copying: 392/1024 [MB] (29 MBps) Copying: 423/1024 [MB] (30 MBps) Copying: 456/1024 [MB] (33 MBps) Copying: 490/1024 [MB] (33 MBps) Copying: 519/1024 [MB] (29 MBps) Copying: 553/1024 [MB] (33 MBps) Copying: 586/1024 [MB] (33 MBps) Copying: 621/1024 [MB] (34 MBps) Copying: 653/1024 [MB] (32 MBps) Copying: 681/1024 [MB] (28 MBps) Copying: 713/1024 [MB] (32 MBps) Copying: 742/1024 [MB] (28 MBps) Copying: 775/1024 [MB] (33 MBps) Copying: 808/1024 [MB] (33 MBps) Copying: 840/1024 [MB] (31 MBps) Copying: 868/1024 [MB] (28 MBps) Copying: 897/1024 [MB] (29 MBps) Copying: 925/1024 [MB] (28 MBps) Copying: 955/1024 [MB] (29 MBps) Copying: 984/1024 [MB] (28 MBps) Copying: 1012/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 30 MBps) 00:20:59.646 00:20:59.646 21:52:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:20:59.646 21:52:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:20:59.904 21:52:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:59.905 [2024-09-29 21:52:18.878460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-09-29 21:52:18.878519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:59.905 [2024-09-29 21:52:18.878531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:59.905 [2024-09-29 21:52:18.878538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-09-29 21:52:18.878558] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:59.905 [2024-09-29 21:52:18.880642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-09-29 21:52:18.880805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:59.905 [2024-09-29 21:52:18.880823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.068 ms 00:20:59.905 [2024-09-29 21:52:18.880829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.905 [2024-09-29 21:52:18.882785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.905 [2024-09-29 21:52:18.882811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:59.905 [2024-09-29 21:52:18.882820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.924 ms 00:20:59.905 [2024-09-29 21:52:18.882826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:00.164 [2024-09-29 21:52:18.895830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.164 [2024-09-29 21:52:18.895949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:00.164 [2024-09-29 21:52:18.895966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.987 ms 00:21:00.164 [2024-09-29 21:52:18.895973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.164 [2024-09-29 21:52:18.901101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.164 [2024-09-29 21:52:18.901127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:00.164 [2024-09-29 21:52:18.901138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.098 ms 00:21:00.164 [2024-09-29 21:52:18.901145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.164 [2024-09-29 21:52:18.919484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.164 [2024-09-29 21:52:18.919522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:00.164 [2024-09-29 21:52:18.919534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.286 ms 00:21:00.164 [2024-09-29 21:52:18.919541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.164 [2024-09-29 21:52:18.931760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.164 [2024-09-29 21:52:18.931796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:00.164 [2024-09-29 21:52:18.931809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.178 ms 00:21:00.164 [2024-09-29 21:52:18.931816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.164 [2024-09-29 21:52:18.931942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.164 [2024-09-29 21:52:18.931951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:00.164 [2024-09-29 21:52:18.931962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:21:00.164 [2024-09-29 21:52:18.931968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.164 [2024-09-29 21:52:18.950213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.164 [2024-09-29 21:52:18.950255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:00.164 [2024-09-29 21:52:18.950266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.228 ms 00:21:00.164 [2024-09-29 21:52:18.950272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.164 [2024-09-29 21:52:18.967908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.164 [2024-09-29 21:52:18.967942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:00.164 [2024-09-29 21:52:18.967953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.596 ms 00:21:00.164 [2024-09-29 21:52:18.967959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.164 [2024-09-29 21:52:18.986168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.164 [2024-09-29 21:52:18.986409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:00.164 [2024-09-29 21:52:18.986428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.171 ms 00:21:00.164 [2024-09-29 
21:52:18.986434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.164 [2024-09-29 21:52:19.003660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.164 [2024-09-29 21:52:19.003693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:00.164 [2024-09-29 21:52:19.003704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.145 ms 00:21:00.164 [2024-09-29 21:52:19.003710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.164 [2024-09-29 21:52:19.003745] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:00.164 [2024-09-29 21:52:19.003757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003891] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:00.164 [2024-09-29 21:52:19.003972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.003979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.003984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.003992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.003997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 
[2024-09-29 21:52:19.004056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:21:00.165 [2024-09-29 21:52:19.004224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:00.165 [2024-09-29 21:52:19.004444] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:00.166 [2024-09-29 21:52:19.004454] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3d6e5af9-4fd9-4dfc-bde3-3b26bab28ab1 00:21:00.166 [2024-09-29 21:52:19.004459] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:00.166 [2024-09-29 21:52:19.004468] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:00.166 [2024-09-29 21:52:19.004473] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:00.166 [2024-09-29 21:52:19.004480] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:00.166 [2024-09-29 21:52:19.004486] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:00.166 [2024-09-29 21:52:19.004493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:00.166 [2024-09-29 21:52:19.004498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:00.166 [2024-09-29 21:52:19.004504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:00.166 [2024-09-29 21:52:19.004509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:00.166 [2024-09-29 21:52:19.004516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.166 [2024-09-29 21:52:19.004521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:00.166 [2024-09-29 21:52:19.004530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:21:00.166 [2024-09-29 21:52:19.004535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.166 [2024-09-29 21:52:19.014073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.166 [2024-09-29 21:52:19.014183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:00.166 [2024-09-29 21:52:19.014198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.506 ms 00:21:00.166 [2024-09-29 21:52:19.014204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.166 [2024-09-29 21:52:19.014507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.166 [2024-09-29 21:52:19.014516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:00.166 [2024-09-29 21:52:19.014524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:21:00.166 [2024-09-29 21:52:19.014531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.166 [2024-09-29 21:52:19.043241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.166 [2024-09-29 21:52:19.043282] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.166 [2024-09-29 21:52:19.043292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.166 [2024-09-29 21:52:19.043298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.166 [2024-09-29 21:52:19.043357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.166 [2024-09-29 21:52:19.043363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.166 [2024-09-29 21:52:19.043370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.166 [2024-09-29 21:52:19.043378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.166 [2024-09-29 21:52:19.043484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.166 [2024-09-29 21:52:19.043492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.166 [2024-09-29 21:52:19.043500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.166 [2024-09-29 21:52:19.043506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.166 [2024-09-29 21:52:19.043523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.166 [2024-09-29 21:52:19.043529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.166 [2024-09-29 21:52:19.043536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.166 [2024-09-29 21:52:19.043541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.166 [2024-09-29 21:52:19.102997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.166 [2024-09-29 21:52:19.103041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.166 [2024-09-29 21:52:19.103053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.166 [2024-09-29 21:52:19.103059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.431 [2024-09-29 21:52:19.152057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.431 [2024-09-29 21:52:19.152103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.431 [2024-09-29 21:52:19.152114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.431 [2024-09-29 21:52:19.152122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.431 [2024-09-29 21:52:19.152187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.431 [2024-09-29 21:52:19.152194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.431 [2024-09-29 21:52:19.152202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.431 [2024-09-29 21:52:19.152208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.431 [2024-09-29 21:52:19.152260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.431 [2024-09-29 21:52:19.152268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.431 [2024-09-29 21:52:19.152275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.431 [2024-09-29 21:52:19.152281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.431 [2024-09-29 21:52:19.152357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:00.431 [2024-09-29 21:52:19.152365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:00.431 [2024-09-29 21:52:19.152373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:00.432 [2024-09-29 21:52:19.152378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.432 [2024-09-29 21:52:19.152427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:00.432 [2024-09-29 21:52:19.152435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:00.432 [2024-09-29 21:52:19.152442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:00.432 [2024-09-29 21:52:19.152461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.432 [2024-09-29 21:52:19.152494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:00.432 [2024-09-29 21:52:19.152501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:00.432 [2024-09-29 21:52:19.152509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:00.432 [2024-09-29 21:52:19.152514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.432 [2024-09-29 21:52:19.152552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:00.432 [2024-09-29 21:52:19.152559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:00.432 [2024-09-29 21:52:19.152567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:00.432 [2024-09-29 21:52:19.152572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:00.432 [2024-09-29 21:52:19.152674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 274.189 ms, result 0
00:21:00.432 true
00:21:00.432 21:52:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 76077
00:21:00.432 21:52:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76077
00:21:00.432 21:52:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:21:00.432 [2024-09-29 21:52:19.241150] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
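This is the pivot of the test. The FTL instance was just unloaded cleanly ('FTL shutdown' finished with result 0), but the spdk_tgt process that owns the rest of the stack is then removed with SIGKILL instead of an orderly app shutdown. Condensed, the script steps traced above (dirty_shutdown.sh lines 83-87) come down to the following; $pid is shorthand for the target pid, 76077 in this run:

# Hard-kill the target: no signal handlers and no cleanup run, as in a crash.
pid=76077
kill -9 "$pid"
# Drop the target's shm trace file so a later run starts fresh.
rm -f "/dev/shm/spdk_tgt_trace.pid$pid"
# Build a second 1 GiB random input file (262144 x 4 KiB blocks) for the
# post-kill write; spdk_dd runs here as its own SPDK application.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/dev/urandom \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
    --bs=4096 --count=262144

The shell's "line 87: 76077 Killed" notice a little further down is bash reporting the effect of that signal when it next reaps the job.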
00:21:00.432 [2024-09-29 21:52:19.241270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76663 ]
00:21:00.432 [2024-09-29 21:52:19.391245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:00.690 [2024-09-29 21:52:19.575212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:21:06.003  Copying: 195/1024 [MB] (195 MBps) Copying: 418/1024 [MB] (223 MBps) Copying: 681/1024 [MB] (262 MBps) Copying: 937/1024 [MB] (256 MBps) Copying: 1024/1024 [MB] (average 235 MBps)
00:21:06.003
00:21:06.003 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76077 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:21:06.003 21:52:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:06.262 [2024-09-29 21:52:24.864969] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
[2024-09-29 21:52:24.865092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76721 ]
00:21:06.262 [2024-09-29 21:52:25.012559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:06.262 [2024-09-29 21:52:25.165355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:21:06.521 [2024-09-29 21:52:25.373684] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:06.521 [2024-09-29 21:52:25.373743] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:06.521 [2024-09-29 21:52:25.436303] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:21:06.521 [2024-09-29 21:52:25.436621] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:21:06.521 [2024-09-29 21:52:25.436878] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:21:06.780 [2024-09-29 21:52:25.620401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:06.780 [2024-09-29 21:52:25.620451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:21:06.780 [2024-09-29 21:52:25.620461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:21:06.780 [2024-09-29 21:52:25.620468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:06.780 [2024-09-29 21:52:25.620507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:06.780 [2024-09-29 21:52:25.620516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:06.780 [2024-09-29 21:52:25.620522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:21:06.780 [2024-09-29 21:52:25.620530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:06.780 [2024-09-29 21:52:25.620543] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:06.780 [2024-09-29 21:52:25.621088] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
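Step 88 above is what lets the test keep writing with no spdk_tgt alive: spdk_dd is a complete SPDK application in its own right, so it can rebuild the bdev stack from the JSON captured earlier with rpc.py save_subsystem_config (steps 64-66) and write straight into the FTL bdev. A sketch of the same invocation with the flags seen in the trace annotated; paths are the ones from this run, and the block-size reading is an assumption based on the bdev's 4 KiB blocks:

# Replay the saved bdev config and append the second gigabyte.
#   --json   bdev subsystem config captured with save_subsystem_config
#   --ob     output is an SPDK bdev (ftl0), not a kernel block device
#   --seek   skip 262144 output I/O units; at the FTL bdev's 4 KiB block
#            size that lands right after the 1 GiB written via /dev/nbd0
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
    --ob=ftl0 \
    --count=262144 --seek=262144 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The two "unable to find bdev" notices above are this standalone app probing for the cache device before the replayed config has finished attaching it, and the blobstore recovery lines are consistent with a store that was never cleanly closed by the SIGKILLed target being replayed from its metadata.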
00:21:06.780 [2024-09-29 21:52:25.621102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.780 [2024-09-29 21:52:25.621108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:06.780 [2024-09-29 21:52:25.621115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:21:06.780 [2024-09-29 21:52:25.621120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.780 [2024-09-29 21:52:25.622143] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:06.780 [2024-09-29 21:52:25.631763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.780 [2024-09-29 21:52:25.631794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:06.780 [2024-09-29 21:52:25.631803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.622 ms 00:21:06.780 [2024-09-29 21:52:25.631809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.780 [2024-09-29 21:52:25.631854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.780 [2024-09-29 21:52:25.631864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:06.780 [2024-09-29 21:52:25.631871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:06.780 [2024-09-29 21:52:25.631877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.780 [2024-09-29 21:52:25.636436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.780 [2024-09-29 21:52:25.636463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:06.781 [2024-09-29 21:52:25.636471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.521 ms 00:21:06.781 [2024-09-29 21:52:25.636477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.781 [2024-09-29 21:52:25.636534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.781 [2024-09-29 21:52:25.636541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:06.781 [2024-09-29 21:52:25.636547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:06.781 [2024-09-29 21:52:25.636553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.781 [2024-09-29 21:52:25.636596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.781 [2024-09-29 21:52:25.636604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:06.781 [2024-09-29 21:52:25.636610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:06.781 [2024-09-29 21:52:25.636616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.781 [2024-09-29 21:52:25.636631] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:06.781 [2024-09-29 21:52:25.639352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.781 [2024-09-29 21:52:25.639495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:06.781 [2024-09-29 21:52:25.639508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.726 ms 00:21:06.781 [2024-09-29 21:52:25.639514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.781 [2024-09-29 21:52:25.639544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.781 
[2024-09-29 21:52:25.639551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:06.781 [2024-09-29 21:52:25.639558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:06.781 [2024-09-29 21:52:25.639563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.781 [2024-09-29 21:52:25.639579] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:06.781 [2024-09-29 21:52:25.639594] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:06.781 [2024-09-29 21:52:25.639621] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:06.781 [2024-09-29 21:52:25.639634] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:06.781 [2024-09-29 21:52:25.639713] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:06.781 [2024-09-29 21:52:25.639721] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:06.781 [2024-09-29 21:52:25.639729] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:06.781 [2024-09-29 21:52:25.639738] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:06.781 [2024-09-29 21:52:25.639745] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:06.781 [2024-09-29 21:52:25.639751] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:06.781 [2024-09-29 21:52:25.639757] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:06.781 [2024-09-29 21:52:25.639763] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:06.781 [2024-09-29 21:52:25.639768] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:06.781 [2024-09-29 21:52:25.639774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.781 [2024-09-29 21:52:25.639782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:06.781 [2024-09-29 21:52:25.639788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:21:06.781 [2024-09-29 21:52:25.639793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.781 [2024-09-29 21:52:25.639855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.781 [2024-09-29 21:52:25.639862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:06.781 [2024-09-29 21:52:25.639867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:06.781 [2024-09-29 21:52:25.639873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.781 [2024-09-29 21:52:25.639949] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:06.781 [2024-09-29 21:52:25.639957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:06.781 [2024-09-29 21:52:25.639965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:06.781 [2024-09-29 21:52:25.639971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.781 [2024-09-29 21:52:25.639977] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:06.781 [2024-09-29 21:52:25.639982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:06.781 [2024-09-29 21:52:25.639987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:06.781 [2024-09-29 21:52:25.639992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:06.781 [2024-09-29 21:52:25.639998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:06.781 [2024-09-29 21:52:25.640015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:06.781 [2024-09-29 21:52:25.640021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:06.781 [2024-09-29 21:52:25.640026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:06.781 [2024-09-29 21:52:25.640031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:06.781 [2024-09-29 21:52:25.640036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:06.781 [2024-09-29 21:52:25.640042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:06.781 [2024-09-29 21:52:25.640052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:06.781 [2024-09-29 21:52:25.640057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:06.781 [2024-09-29 21:52:25.640067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.781 [2024-09-29 21:52:25.640077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:06.781 [2024-09-29 21:52:25.640083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.781 [2024-09-29 21:52:25.640093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:06.781 [2024-09-29 21:52:25.640098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.781 [2024-09-29 21:52:25.640108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:06.781 [2024-09-29 21:52:25.640113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.781 [2024-09-29 21:52:25.640123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:06.781 [2024-09-29 21:52:25.640128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:06.781 [2024-09-29 21:52:25.640138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:06.781 [2024-09-29 21:52:25.640143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:06.781 [2024-09-29 
21:52:25.640148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:06.781 [2024-09-29 21:52:25.640153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:06.781 [2024-09-29 21:52:25.640158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:06.781 [2024-09-29 21:52:25.640163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:06.781 [2024-09-29 21:52:25.640173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:06.781 [2024-09-29 21:52:25.640178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640184] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:06.781 [2024-09-29 21:52:25.640190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:06.781 [2024-09-29 21:52:25.640195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:06.781 [2024-09-29 21:52:25.640200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.781 [2024-09-29 21:52:25.640206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:06.781 [2024-09-29 21:52:25.640212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:06.781 [2024-09-29 21:52:25.640217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:06.781 [2024-09-29 21:52:25.640222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:06.781 [2024-09-29 21:52:25.640226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:06.781 [2024-09-29 21:52:25.640231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:06.781 [2024-09-29 21:52:25.640237] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:06.781 [2024-09-29 21:52:25.640243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:06.781 [2024-09-29 21:52:25.640250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:06.781 [2024-09-29 21:52:25.640256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:06.781 [2024-09-29 21:52:25.640261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:06.781 [2024-09-29 21:52:25.640267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:06.781 [2024-09-29 21:52:25.640272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:06.781 [2024-09-29 21:52:25.640277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:06.782 [2024-09-29 21:52:25.640282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:06.782 [2024-09-29 21:52:25.640288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:06.782 [2024-09-29 21:52:25.640293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:06.782 [2024-09-29 21:52:25.640298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:06.782 [2024-09-29 21:52:25.640303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:06.782 [2024-09-29 21:52:25.640309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:06.782 [2024-09-29 21:52:25.640314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:06.782 [2024-09-29 21:52:25.640319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:06.782 [2024-09-29 21:52:25.640324] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:06.782 [2024-09-29 21:52:25.640330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:06.782 [2024-09-29 21:52:25.640338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:06.782 [2024-09-29 21:52:25.640344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:06.782 [2024-09-29 21:52:25.640349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:06.782 [2024-09-29 21:52:25.640355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:06.782 [2024-09-29 21:52:25.640361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.640366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:06.782 [2024-09-29 21:52:25.640372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:21:06.782 [2024-09-29 21:52:25.640377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.679339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.679402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.782 [2024-09-29 21:52:25.679413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.916 ms 00:21:06.782 [2024-09-29 21:52:25.679420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.679506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.679513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:06.782 [2024-09-29 21:52:25.679520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:06.782 [2024-09-29 21:52:25.679526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.703540] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.703578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.782 [2024-09-29 21:52:25.703587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.946 ms 00:21:06.782 [2024-09-29 21:52:25.703593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.703635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.703642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.782 [2024-09-29 21:52:25.703648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:06.782 [2024-09-29 21:52:25.703654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.703978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.703991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.782 [2024-09-29 21:52:25.703999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:21:06.782 [2024-09-29 21:52:25.704005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.704106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.704112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.782 [2024-09-29 21:52:25.704119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:06.782 [2024-09-29 21:52:25.704125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.714076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.714257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.782 [2024-09-29 21:52:25.714271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.934 ms 00:21:06.782 [2024-09-29 21:52:25.714277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.724246] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:06.782 [2024-09-29 21:52:25.724410] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:06.782 [2024-09-29 21:52:25.724465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.724482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:06.782 [2024-09-29 21:52:25.724498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.062 ms 00:21:06.782 [2024-09-29 21:52:25.724513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.743419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.743588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:06.782 [2024-09-29 21:52:25.743640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.858 ms 00:21:06.782 [2024-09-29 21:52:25.743657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.753006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 
[2024-09-29 21:52:25.753142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:06.782 [2024-09-29 21:52:25.753187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.286 ms 00:21:06.782 [2024-09-29 21:52:25.753204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.762080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.762216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:06.782 [2024-09-29 21:52:25.762264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.838 ms 00:21:06.782 [2024-09-29 21:52:25.762281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.782 [2024-09-29 21:52:25.762781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.782 [2024-09-29 21:52:25.762851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:06.782 [2024-09-29 21:52:25.763143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:21:06.782 [2024-09-29 21:52:25.763177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.041 [2024-09-29 21:52:25.808126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.041 [2024-09-29 21:52:25.808325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:07.041 [2024-09-29 21:52:25.808374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.873 ms 00:21:07.041 [2024-09-29 21:52:25.808494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.041 [2024-09-29 21:52:25.816690] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:07.041 [2024-09-29 21:52:25.819138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.041 [2024-09-29 21:52:25.819238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:07.041 [2024-09-29 21:52:25.819279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.537 ms 00:21:07.041 [2024-09-29 21:52:25.819298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.041 [2024-09-29 21:52:25.819419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.041 [2024-09-29 21:52:25.819463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:07.041 [2024-09-29 21:52:25.819495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:07.041 [2024-09-29 21:52:25.819513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.041 [2024-09-29 21:52:25.819598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.041 [2024-09-29 21:52:25.819624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:07.041 [2024-09-29 21:52:25.819670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:07.041 [2024-09-29 21:52:25.819688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.041 [2024-09-29 21:52:25.819717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.041 [2024-09-29 21:52:25.819735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:07.041 [2024-09-29 21:52:25.819773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:07.041 
[2024-09-29 21:52:25.819788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.041 [2024-09-29 21:52:25.819827] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:07.041 [2024-09-29 21:52:25.819879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.041 [2024-09-29 21:52:25.819899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:07.041 [2024-09-29 21:52:25.819915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:07.041 [2024-09-29 21:52:25.819930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.041 [2024-09-29 21:52:25.838461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.041 [2024-09-29 21:52:25.838622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:07.041 [2024-09-29 21:52:25.838667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.505 ms 00:21:07.041 [2024-09-29 21:52:25.838686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.041 [2024-09-29 21:52:25.838765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.041 [2024-09-29 21:52:25.838804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:07.041 [2024-09-29 21:52:25.838823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:07.041 [2024-09-29 21:52:25.838837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.041 [2024-09-29 21:52:25.839738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 218.988 ms, result 0 00:21:31.062  Copying: 1024/1024 [MB] (average 42 MBps)[2024-09-29 21:52:49.769242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.062 [2024-09-29 21:52:49.769307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:31.062 [2024-09-29 21:52:49.769322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:31.062 [2024-09-29 21:52:49.769330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.062 [2024-09-29 21:52:49.771049] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.062 [2024-09-29 21:52:49.775779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.062 [2024-09-29 21:52:49.775883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:31.062 [2024-09-29 21:52:49.775935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.621 ms 00:21:31.062 [2024-09-29 21:52:49.775954]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.062 [2024-09-29 21:52:49.785006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.062 [2024-09-29 21:52:49.785100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:31.062 [2024-09-29 21:52:49.785193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.843 ms 00:21:31.062 [2024-09-29 21:52:49.785211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.062 [2024-09-29 21:52:49.801091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.062 [2024-09-29 21:52:49.801193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:31.062 [2024-09-29 21:52:49.801268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.856 ms 00:21:31.062 [2024-09-29 21:52:49.801286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.063 [2024-09-29 21:52:49.805997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.063 [2024-09-29 21:52:49.806083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:31.063 [2024-09-29 21:52:49.806132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.681 ms 00:21:31.063 [2024-09-29 21:52:49.806149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.063 [2024-09-29 21:52:49.825046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.063 [2024-09-29 21:52:49.825147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:31.063 [2024-09-29 21:52:49.825191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.846 ms 00:21:31.063 [2024-09-29 21:52:49.825209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.063 [2024-09-29 21:52:49.837019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.063 [2024-09-29 21:52:49.837112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:31.063 [2024-09-29 21:52:49.837177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.778 ms 00:21:31.063 [2024-09-29 21:52:49.837195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.063 [2024-09-29 21:52:49.888518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.063 [2024-09-29 21:52:49.888627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:31.063 [2024-09-29 21:52:49.888665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.288 ms 00:21:31.063 [2024-09-29 21:52:49.888684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.063 [2024-09-29 21:52:49.906869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.063 [2024-09-29 21:52:49.906964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:31.063 [2024-09-29 21:52:49.906977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.162 ms 00:21:31.063 [2024-09-29 21:52:49.906984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.063 [2024-09-29 21:52:49.924308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.063 [2024-09-29 21:52:49.924342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:31.063 [2024-09-29 21:52:49.924351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 17.302 ms 00:21:31.063 [2024-09-29 21:52:49.924356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.063 [2024-09-29 21:52:49.941477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.063 [2024-09-29 21:52:49.941504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:31.063 [2024-09-29 21:52:49.941513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.095 ms 00:21:31.063 [2024-09-29 21:52:49.941519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.063 [2024-09-29 21:52:49.958275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.063 [2024-09-29 21:52:49.958300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:31.063 [2024-09-29 21:52:49.958309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.698 ms 00:21:31.063 [2024-09-29 21:52:49.958315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.063 [2024-09-29 21:52:49.958340] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:31.063 [2024-09-29 21:52:49.958353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 125184 / 261120 wr_cnt: 1 state: open 00:21:31.063 [2024-09-29 21:52:49.958362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:31.063 [2024-09-29 21:52:49.958869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958908] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.958999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959062] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:31.064 [2024-09-29 21:52:49.959124] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:31.064 [2024-09-29 21:52:49.959131] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3d6e5af9-4fd9-4dfc-bde3-3b26bab28ab1 00:21:31.064 [2024-09-29 21:52:49.959137] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 125184 00:21:31.064 [2024-09-29 21:52:49.959143] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126144 00:21:31.064 [2024-09-29 21:52:49.959149] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 125184 00:21:31.064 [2024-09-29 21:52:49.959155] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0077 00:21:31.064 [2024-09-29 21:52:49.959160] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:31.064 [2024-09-29 21:52:49.959166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:31.064 [2024-09-29 21:52:49.959177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:31.064 [2024-09-29 21:52:49.959182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:31.064 [2024-09-29 21:52:49.959187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:31.064 [2024-09-29 21:52:49.959193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.064 [2024-09-29 21:52:49.959202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:31.064 [2024-09-29 21:52:49.959210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:21:31.064 [2024-09-29 21:52:49.959215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.064 [2024-09-29 21:52:49.969120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.064 [2024-09-29 21:52:49.969146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:31.064 [2024-09-29 21:52:49.969155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.892 ms 00:21:31.064 [2024-09-29 21:52:49.969161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.064 [2024-09-29 21:52:49.969470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.064 [2024-09-29 21:52:49.969481] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:31.064 [2024-09-29 21:52:49.969487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:21:31.064 [2024-09-29 21:52:49.969494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.064 [2024-09-29 21:52:49.992814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.064 [2024-09-29 21:52:49.992847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:31.064 [2024-09-29 21:52:49.992856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.064 [2024-09-29 21:52:49.992867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.064 [2024-09-29 21:52:49.992918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.064 [2024-09-29 21:52:49.992925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.064 [2024-09-29 21:52:49.992931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.064 [2024-09-29 21:52:49.992938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.064 [2024-09-29 21:52:49.992989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.064 [2024-09-29 21:52:49.993001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.064 [2024-09-29 21:52:49.993008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.064 [2024-09-29 21:52:49.993015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.064 [2024-09-29 21:52:49.993030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.064 [2024-09-29 21:52:49.993037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.064 [2024-09-29 21:52:49.993043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.064 [2024-09-29 21:52:49.993049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.323 [2024-09-29 21:52:50.056518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.323 [2024-09-29 21:52:50.056564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.323 [2024-09-29 21:52:50.056575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.323 [2024-09-29 21:52:50.056582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.323 [2024-09-29 21:52:50.107522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.323 [2024-09-29 21:52:50.107571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.323 [2024-09-29 21:52:50.107581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.323 [2024-09-29 21:52:50.107588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.323 [2024-09-29 21:52:50.107661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.323 [2024-09-29 21:52:50.107670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:31.323 [2024-09-29 21:52:50.107677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.323 [2024-09-29 21:52:50.107682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.323 [2024-09-29 21:52:50.107713] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.323 [2024-09-29 21:52:50.107725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:31.323 [2024-09-29 21:52:50.107732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.323 [2024-09-29 21:52:50.107738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.323 [2024-09-29 21:52:50.107815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.323 [2024-09-29 21:52:50.107823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:31.323 [2024-09-29 21:52:50.107831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.323 [2024-09-29 21:52:50.107838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.323 [2024-09-29 21:52:50.107863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.323 [2024-09-29 21:52:50.107871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:31.323 [2024-09-29 21:52:50.107880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.323 [2024-09-29 21:52:50.107886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.323 [2024-09-29 21:52:50.107917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.323 [2024-09-29 21:52:50.107925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:31.323 [2024-09-29 21:52:50.107933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.323 [2024-09-29 21:52:50.107939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.323 [2024-09-29 21:52:50.107976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.323 [2024-09-29 21:52:50.107987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:31.323 [2024-09-29 21:52:50.107994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.323 [2024-09-29 21:52:50.108002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.323 [2024-09-29 21:52:50.108109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.530 ms, result 0 00:21:32.696 00:21:32.696 00:21:32.696 21:52:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:34.595 21:52:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:34.595 [2024-09-29 21:52:53.533590] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
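The ftl_dev_dump_stats output from the shutdown above reported total writes 126144, user writes 125184, and WAF 1.0077. Write amplification factor is conventionally the ratio of media-level writes to host-issued writes, and the dumped figures are consistent with that reading: 126144 / 125184 is approximately 1.0077. A one-line check, using only numbers copied from that dump:

#include <stdio.h>

int main(void)
{
	/* Figures copied from the ftl_dev_dump_stats output above. */
	double total_writes = 126144.0; /* media-level writes */
	double user_writes = 125184.0;  /* host-issued writes */

	printf("WAF = %.4f\n", total_writes / user_writes); /* prints 1.0077 */
	return 0;
}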
00:21:34.595 [2024-09-29 21:52:53.533706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77018 ] 00:21:34.853 [2024-09-29 21:52:53.677352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.111 [2024-09-29 21:52:53.861619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.111 [2024-09-29 21:52:54.091459] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.111 [2024-09-29 21:52:54.091522] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:35.370 [2024-09-29 21:52:54.244879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-09-29 21:52:54.245100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:35.370 [2024-09-29 21:52:54.245118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:35.370 [2024-09-29 21:52:54.245132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-09-29 21:52:54.245176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-09-29 21:52:54.245185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.370 [2024-09-29 21:52:54.245191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:35.370 [2024-09-29 21:52:54.245198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-09-29 21:52:54.245214] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:35.370 [2024-09-29 21:52:54.245751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:35.370 [2024-09-29 21:52:54.245765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-09-29 21:52:54.245771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.370 [2024-09-29 21:52:54.245779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:21:35.370 [2024-09-29 21:52:54.245785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-09-29 21:52:54.247093] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:35.370 [2024-09-29 21:52:54.257470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-09-29 21:52:54.257502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:35.370 [2024-09-29 21:52:54.257513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.378 ms 00:21:35.370 [2024-09-29 21:52:54.257519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-09-29 21:52:54.257569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-09-29 21:52:54.257578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:35.370 [2024-09-29 21:52:54.257585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:35.370 [2024-09-29 21:52:54.257591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-09-29 21:52:54.263832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:35.370 [2024-09-29 21:52:54.263987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.370 [2024-09-29 21:52:54.264006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.193 ms 00:21:35.370 [2024-09-29 21:52:54.264014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-09-29 21:52:54.264084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-09-29 21:52:54.264092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.370 [2024-09-29 21:52:54.264103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:35.370 [2024-09-29 21:52:54.264110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.370 [2024-09-29 21:52:54.264161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.370 [2024-09-29 21:52:54.264169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:35.371 [2024-09-29 21:52:54.264177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:35.371 [2024-09-29 21:52:54.264183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-09-29 21:52:54.264206] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:35.371 [2024-09-29 21:52:54.267184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-09-29 21:52:54.267291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.371 [2024-09-29 21:52:54.267304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.984 ms 00:21:35.371 [2024-09-29 21:52:54.267310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-09-29 21:52:54.267337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-09-29 21:52:54.267345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:35.371 [2024-09-29 21:52:54.267352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:35.371 [2024-09-29 21:52:54.267358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-09-29 21:52:54.267378] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:35.371 [2024-09-29 21:52:54.267406] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:35.371 [2024-09-29 21:52:54.267438] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:35.371 [2024-09-29 21:52:54.267451] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:35.371 [2024-09-29 21:52:54.267534] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:35.371 [2024-09-29 21:52:54.267544] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:35.371 [2024-09-29 21:52:54.267553] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:35.371 [2024-09-29 21:52:54.267564] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:35.371 [2024-09-29 21:52:54.267572] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:35.371 [2024-09-29 21:52:54.267579] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:35.371 [2024-09-29 21:52:54.267586] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:35.371 [2024-09-29 21:52:54.267592] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:35.371 [2024-09-29 21:52:54.267598] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:35.371 [2024-09-29 21:52:54.267605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-09-29 21:52:54.267611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:35.371 [2024-09-29 21:52:54.267618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:21:35.371 [2024-09-29 21:52:54.267623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-09-29 21:52:54.267687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.371 [2024-09-29 21:52:54.267696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:35.371 [2024-09-29 21:52:54.267702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:35.371 [2024-09-29 21:52:54.267707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.371 [2024-09-29 21:52:54.267791] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:35.371 [2024-09-29 21:52:54.267801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:35.371 [2024-09-29 21:52:54.267808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.371 [2024-09-29 21:52:54.267816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.371 [2024-09-29 21:52:54.267822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:35.371 [2024-09-29 21:52:54.267827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:35.371 [2024-09-29 21:52:54.267832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:35.371 [2024-09-29 21:52:54.267838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:35.371 [2024-09-29 21:52:54.267843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:35.371 [2024-09-29 21:52:54.267849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.371 [2024-09-29 21:52:54.267855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:35.371 [2024-09-29 21:52:54.267860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:35.371 [2024-09-29 21:52:54.267866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:35.371 [2024-09-29 21:52:54.267877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:35.371 [2024-09-29 21:52:54.267882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:35.371 [2024-09-29 21:52:54.267887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.371 [2024-09-29 21:52:54.267894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:35.371 [2024-09-29 21:52:54.267899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:35.371 [2024-09-29 21:52:54.267905] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.371 [2024-09-29 21:52:54.267910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:35.371 [2024-09-29 21:52:54.267915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:35.371 [2024-09-29 21:52:54.267920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.371 [2024-09-29 21:52:54.267925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:35.371 [2024-09-29 21:52:54.267931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:35.371 [2024-09-29 21:52:54.267936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.371 [2024-09-29 21:52:54.267941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:35.371 [2024-09-29 21:52:54.267947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:35.371 [2024-09-29 21:52:54.267952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.371 [2024-09-29 21:52:54.267957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:35.371 [2024-09-29 21:52:54.267962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:35.371 [2024-09-29 21:52:54.267967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:35.371 [2024-09-29 21:52:54.267972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:35.371 [2024-09-29 21:52:54.267977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:35.371 [2024-09-29 21:52:54.267982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.371 [2024-09-29 21:52:54.267987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:35.371 [2024-09-29 21:52:54.267993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:35.371 [2024-09-29 21:52:54.267998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:35.371 [2024-09-29 21:52:54.268004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:35.371 [2024-09-29 21:52:54.268009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:35.371 [2024-09-29 21:52:54.268014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.371 [2024-09-29 21:52:54.268019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:35.371 [2024-09-29 21:52:54.268025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:35.371 [2024-09-29 21:52:54.268031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.371 [2024-09-29 21:52:54.268036] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:35.371 [2024-09-29 21:52:54.268042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:35.371 [2024-09-29 21:52:54.268049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:35.371 [2024-09-29 21:52:54.268055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:35.371 [2024-09-29 21:52:54.268060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:35.371 [2024-09-29 21:52:54.268067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:35.371 [2024-09-29 21:52:54.268072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:35.371 
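The region dump above reports offsets and sizes in MiB, while the SB metadata layout dump that follows reports the same regions as block offsets (blk_offs) and block sizes (blk_sz) in hex. Assuming the FTL block size of 4 KiB, the two agree; for example, the l2p region's blk_sz of 0x5000 blocks is 0x5000 * 4096 bytes = 80.00 MiB, matching the "blocks: 80.00 MiB" line above. A small conversion sketch, where the 4 KiB block size is the only assumption:

#include <stdio.h>

#define FTL_BLOCK_SIZE 4096 /* assumed FTL block size of 4 KiB */

static double blocks_to_mib(unsigned long blocks)
{
	return (double)blocks * FTL_BLOCK_SIZE / (1024.0 * 1024.0);
}

int main(void)
{
	/* l2p region values from the SB metadata layout dump below:
	 * blk_offs:0x20 blk_sz:0x5000 */
	printf("offset: %.2f MiB\n", blocks_to_mib(0x20));   /* 0.12 MiB */
	printf("size:   %.2f MiB\n", blocks_to_mib(0x5000)); /* 80.00 MiB */
	return 0;
}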
[2024-09-29 21:52:54.268077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:35.371 [2024-09-29 21:52:54.268082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:35.371 [2024-09-29 21:52:54.268087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:35.371 [2024-09-29 21:52:54.268094] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:35.371 [2024-09-29 21:52:54.268101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.371 [2024-09-29 21:52:54.268107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:35.371 [2024-09-29 21:52:54.268113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:35.371 [2024-09-29 21:52:54.268119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:35.371 [2024-09-29 21:52:54.268124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:35.371 [2024-09-29 21:52:54.268129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:35.371 [2024-09-29 21:52:54.268135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:35.371 [2024-09-29 21:52:54.268140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:35.371 [2024-09-29 21:52:54.268146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:35.371 [2024-09-29 21:52:54.268151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:35.371 [2024-09-29 21:52:54.268157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:35.372 [2024-09-29 21:52:54.268163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:35.372 [2024-09-29 21:52:54.268168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:35.372 [2024-09-29 21:52:54.268174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:35.372 [2024-09-29 21:52:54.268182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:35.372 [2024-09-29 21:52:54.268187] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:35.372 [2024-09-29 21:52:54.268194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:35.372 [2024-09-29 21:52:54.268201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:35.372 [2024-09-29 21:52:54.268207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:35.372 [2024-09-29 21:52:54.268212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:35.372 [2024-09-29 21:52:54.268217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:35.372 [2024-09-29 21:52:54.268223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-09-29 21:52:54.268229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:35.372 [2024-09-29 21:52:54.268235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:21:35.372 [2024-09-29 21:52:54.268241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-09-29 21:52:54.302322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-09-29 21:52:54.302376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.372 [2024-09-29 21:52:54.302423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.032 ms 00:21:35.372 [2024-09-29 21:52:54.302435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-09-29 21:52:54.302554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-09-29 21:52:54.302566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:35.372 [2024-09-29 21:52:54.302576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:35.372 [2024-09-29 21:52:54.302586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-09-29 21:52:54.329298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-09-29 21:52:54.329349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.372 [2024-09-29 21:52:54.329361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.636 ms 00:21:35.372 [2024-09-29 21:52:54.329368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-09-29 21:52:54.329413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-09-29 21:52:54.329421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:35.372 [2024-09-29 21:52:54.329429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:35.372 [2024-09-29 21:52:54.329435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-09-29 21:52:54.329845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-09-29 21:52:54.329858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:35.372 [2024-09-29 21:52:54.329868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:21:35.372 [2024-09-29 21:52:54.329878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-09-29 21:52:54.329988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-09-29 21:52:54.330002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:35.372 [2024-09-29 21:52:54.330009] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:35.372 [2024-09-29 21:52:54.330015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-09-29 21:52:54.341096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-09-29 21:52:54.341123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:35.372 [2024-09-29 21:52:54.341132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.064 ms 00:21:35.372 [2024-09-29 21:52:54.341138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.372 [2024-09-29 21:52:54.351349] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:35.372 [2024-09-29 21:52:54.351379] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:35.372 [2024-09-29 21:52:54.351404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.372 [2024-09-29 21:52:54.351412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:35.372 [2024-09-29 21:52:54.351431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.165 ms 00:21:35.372 [2024-09-29 21:52:54.351438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.370694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.370861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:35.641 [2024-09-29 21:52:54.370876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.222 ms 00:21:35.641 [2024-09-29 21:52:54.370884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.379896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.379924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:35.641 [2024-09-29 21:52:54.379932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.973 ms 00:21:35.641 [2024-09-29 21:52:54.379938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.388645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.388671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:35.641 [2024-09-29 21:52:54.388680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.678 ms 00:21:35.641 [2024-09-29 21:52:54.388686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.389150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.389166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:35.641 [2024-09-29 21:52:54.389174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:21:35.641 [2024-09-29 21:52:54.389181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.437831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.438041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:35.641 [2024-09-29 21:52:54.438059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.635 ms 00:21:35.641 [2024-09-29 21:52:54.438066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.446445] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:35.641 [2024-09-29 21:52:54.448996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.449083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:35.641 [2024-09-29 21:52:54.449490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.896 ms 00:21:35.641 [2024-09-29 21:52:54.449760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.450144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.450375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:35.641 [2024-09-29 21:52:54.450575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:35.641 [2024-09-29 21:52:54.450725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.454464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.454585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:35.641 [2024-09-29 21:52:54.454647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.509 ms 00:21:35.641 [2024-09-29 21:52:54.454675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.454737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.454802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:35.641 [2024-09-29 21:52:54.454830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:35.641 [2024-09-29 21:52:54.454894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.454973] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:35.641 [2024-09-29 21:52:54.455126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.455157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:35.641 [2024-09-29 21:52:54.455191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:21:35.641 [2024-09-29 21:52:54.455215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.481662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.481829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:35.641 [2024-09-29 21:52:54.481886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.360 ms 00:21:35.641 [2024-09-29 21:52:54.481911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.641 [2024-09-29 21:52:54.481999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.641 [2024-09-29 21:52:54.482025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:35.641 [2024-09-29 21:52:54.482046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:35.641 [2024-09-29 21:52:54.482064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
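The "Set FTL dirty state" step above is the counterpart of the "Set FTL clean state" step seen during the earlier shutdown: the superblock is marked dirty while the device is live and marked clean again only after an orderly shutdown, so the next startup can tell whether recovery is needed. That handshake is what ftl_dirty_shutdown exercises. A minimal sketch of the pattern; the struct, field, and function names are hypothetical, not SPDK's:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical superblock flag illustrating the dirty/clean handshake
 * implied by the trace above. Not SPDK's actual superblock layout. */
struct superblock {
	bool clean;
};

static void persist(struct superblock *sb)
{
	(void)sb; /* a real FTL would write the superblock to media here */
}

static void startup(struct superblock *sb)
{
	if (!sb->clean)
		printf("previous shutdown was dirty: recovery needed\n");
	sb->clean = false; /* "Set FTL dirty state" */
	persist(sb);
}

static void shutdown_clean(struct superblock *sb)
{
	sb->clean = true; /* "Set FTL clean state" */
	persist(sb);
}

int main(void)
{
	struct superblock sb = { .clean = true };

	startup(&sb);        /* marks dirty while live */
	shutdown_clean(&sb); /* marks clean on orderly shutdown */
	return 0;
}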
00:21:35.641 [2024-09-29 21:52:54.483471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 238.084 ms, result 0
00:21:58.185  Copying: 984/1048576 [kB] (984 kBps) Copying: 5396/1048576 [kB] (4412 kBps) Copying: 52/1024 [MB] (47 MBps) Copying: 106/1024 [MB] (53 MBps) Copying: 162/1024 [MB] (56 MBps) Copying: 215/1024 [MB] (52 MBps) Copying: 267/1024 [MB] (52 MBps) Copying: 317/1024 [MB] (50 MBps) Copying: 369/1024 [MB] (51 MBps) Copying: 424/1024 [MB] (54 MBps) Copying: 476/1024 [MB] (52 MBps) Copying: 527/1024 [MB] (51 MBps) Copying: 576/1024 [MB] (49 MBps) Copying: 626/1024 [MB] (49 MBps) Copying: 681/1024 [MB] (54 MBps) Copying: 734/1024 [MB] (53 MBps) Copying: 786/1024 [MB] (52 MBps) Copying: 838/1024 [MB] (52 MBps) Copying: 895/1024 [MB] (57 MBps) Copying: 949/1024 [MB] (53 MBps) Copying: 1007/1024 [MB] (57 MBps) Copying: 1024/1024 [MB] (average 47 MBps)
[2024-09-29 21:53:16.962836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.185 [2024-09-29 21:53:16.963133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:58.185 [2024-09-29 21:53:16.963157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:21:58.185 [2024-09-29 21:53:16.963166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.185 [2024-09-29 21:53:16.963197] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:58.185 [2024-09-29 21:53:16.966055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.185 [2024-09-29 21:53:16.966095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:58.185 [2024-09-29 21:53:16.966107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms
00:21:58.185 [2024-09-29 21:53:16.966115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.185 [2024-09-29 21:53:16.966373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.185 [2024-09-29 21:53:16.966396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:58.185 [2024-09-29 21:53:16.966406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms
00:21:58.185 [2024-09-29 21:53:16.966414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.185 [2024-09-29 21:53:16.976304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.185 [2024-09-29 21:53:16.976339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:58.185 [2024-09-29 21:53:16.976349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.873 ms
00:21:58.185 [2024-09-29 21:53:16.976362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.185 [2024-09-29 21:53:16.982630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.185 [2024-09-29 21:53:16.982658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:58.185 [2024-09-29 21:53:16.982668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.233 ms
00:21:58.185 [2024-09-29 21:53:16.982677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.185 [2024-09-29 21:53:17.006778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:58.185 [2024-09-29 21:53:17.006815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:21:58.185 [2024-09-29 21:53:17.006826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.059 ms 00:21:58.185 [2024-09-29 21:53:17.006834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.185 [2024-09-29 21:53:17.021125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.185 [2024-09-29 21:53:17.021163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:58.185 [2024-09-29 21:53:17.021175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.256 ms 00:21:58.185 [2024-09-29 21:53:17.021184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.185 [2024-09-29 21:53:17.023476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.185 [2024-09-29 21:53:17.023651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:58.186 [2024-09-29 21:53:17.023670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.258 ms 00:21:58.186 [2024-09-29 21:53:17.023678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.186 [2024-09-29 21:53:17.046947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.186 [2024-09-29 21:53:17.047146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:58.186 [2024-09-29 21:53:17.047162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.248 ms 00:21:58.186 [2024-09-29 21:53:17.047172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.186 [2024-09-29 21:53:17.070033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.186 [2024-09-29 21:53:17.070164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:58.186 [2024-09-29 21:53:17.070179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.831 ms 00:21:58.186 [2024-09-29 21:53:17.070186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.186 [2024-09-29 21:53:17.092746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.186 [2024-09-29 21:53:17.092866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:58.186 [2024-09-29 21:53:17.092882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.530 ms 00:21:58.186 [2024-09-29 21:53:17.092889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.186 [2024-09-29 21:53:17.114908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.186 [2024-09-29 21:53:17.114937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:58.186 [2024-09-29 21:53:17.114947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.965 ms 00:21:58.186 [2024-09-29 21:53:17.114954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.186 [2024-09-29 21:53:17.114986] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:58.186 [2024-09-29 21:53:17.115002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:58.186 [2024-09-29 21:53:17.115017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:21:58.186 [2024-09-29 21:53:17.115026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115035] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 
21:53:17.115251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:21:58.186 [2024-09-29 21:53:17.115468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:58.186 [2024-09-29 21:53:17.115606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:58.187 [2024-09-29 21:53:17.115830] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:58.187 [2024-09-29 21:53:17.115838] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3d6e5af9-4fd9-4dfc-bde3-3b26bab28ab1 00:21:58.187 [2024-09-29 21:53:17.115846] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 
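
The dump above is internally consistent: the only non-free bands (Band 1 with 261120 valid blocks and Band 2 with 1536) account exactly for the reported "total valid LBAs", and the write-amplification factor in the statistics that follow is simply total writes divided by user writes. A quick arithmetic check, with the values copied from this dump:

# Band validity: all bands other than 1 and 2 report 0 valid blocks (state: free).
band_valid = {1: 261120, 2: 1536}
total_valid_lbas = sum(band_valid.values())
assert total_valid_lbas == 262656          # matches "total valid LBAs: 262656"

# WAF = total writes / user writes (figures from the stats just below).
total_writes, user_writes = 139456, 137472
waf = total_writes / user_writes
print(f"WAF = {waf:.4f}")                  # prints WAF = 1.0144, matching "WAF: 1.0144"
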
00:21:58.187 [2024-09-29 21:53:17.115852] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 139456 00:21:58.187 [2024-09-29 21:53:17.115860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 137472 00:21:58.187 [2024-09-29 21:53:17.115869] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0144 00:21:58.187 [2024-09-29 21:53:17.115876] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:58.187 [2024-09-29 21:53:17.115883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:58.187 [2024-09-29 21:53:17.115891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:58.187 [2024-09-29 21:53:17.115897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:58.187 [2024-09-29 21:53:17.115904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:58.187 [2024-09-29 21:53:17.115911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.187 [2024-09-29 21:53:17.115918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:58.187 [2024-09-29 21:53:17.115933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:21:58.187 [2024-09-29 21:53:17.115943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.187 [2024-09-29 21:53:17.128965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.187 [2024-09-29 21:53:17.129069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:58.187 [2024-09-29 21:53:17.129116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.006 ms 00:21:58.187 [2024-09-29 21:53:17.129138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.187 [2024-09-29 21:53:17.129524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.187 [2024-09-29 21:53:17.129555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:58.187 [2024-09-29 21:53:17.129613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:21:58.187 [2024-09-29 21:53:17.129635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.187 [2024-09-29 21:53:17.159370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.187 [2024-09-29 21:53:17.159509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.187 [2024-09-29 21:53:17.159558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.187 [2024-09-29 21:53:17.159580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.187 [2024-09-29 21:53:17.159657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.187 [2024-09-29 21:53:17.159680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.187 [2024-09-29 21:53:17.159700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.187 [2024-09-29 21:53:17.159720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.187 [2024-09-29 21:53:17.159793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.187 [2024-09-29 21:53:17.159818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.187 [2024-09-29 21:53:17.159839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.187 [2024-09-29 21:53:17.159906] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.187 [2024-09-29 21:53:17.159938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.187 [2024-09-29 21:53:17.159959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.187 [2024-09-29 21:53:17.159983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.187 [2024-09-29 21:53:17.160001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.446 [2024-09-29 21:53:17.240036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.446 [2024-09-29 21:53:17.240240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.446 [2024-09-29 21:53:17.240292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.446 [2024-09-29 21:53:17.240313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.446 [2024-09-29 21:53:17.309979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.446 [2024-09-29 21:53:17.310189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.446 [2024-09-29 21:53:17.310353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.446 [2024-09-29 21:53:17.310405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.446 [2024-09-29 21:53:17.310511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.446 [2024-09-29 21:53:17.310535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.446 [2024-09-29 21:53:17.310555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.446 [2024-09-29 21:53:17.310619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.446 [2024-09-29 21:53:17.310672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.446 [2024-09-29 21:53:17.310695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.446 [2024-09-29 21:53:17.310715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.446 [2024-09-29 21:53:17.310738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.446 [2024-09-29 21:53:17.310852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.446 [2024-09-29 21:53:17.310939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.446 [2024-09-29 21:53:17.310959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.446 [2024-09-29 21:53:17.310977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.446 [2024-09-29 21:53:17.311019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.446 [2024-09-29 21:53:17.311042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:58.446 [2024-09-29 21:53:17.311096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.446 [2024-09-29 21:53:17.311107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.446 [2024-09-29 21:53:17.311152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.446 [2024-09-29 21:53:17.311162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.446 [2024-09-29 21:53:17.311170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms
00:21:58.446 [2024-09-29 21:53:17.311178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.446 [2024-09-29 21:53:17.311225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:58.446 [2024-09-29 21:53:17.311235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:58.446 [2024-09-29 21:53:17.311244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:58.446 [2024-09-29 21:53:17.311254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:58.446 [2024-09-29 21:53:17.311376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.509 ms, result 0
00:21:59.821
00:21:59.821
00:21:59.821 21:53:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:22:01.839 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:22:01.839 21:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:02.097 [2024-09-29 21:53:20.702002] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:22:02.097 [2024-09-29 21:53:20.702143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77301 ]
00:22:02.356 [2024-09-29 21:53:20.850690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:02.356 [2024-09-29 21:53:21.063268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:22:02.356 [2024-09-29 21:53:21.337845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-09-29 21:53:21.338106] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:02.617 [2024-09-29 21:53:21.493532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:02.617 [2024-09-29 21:53:21.493586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:22:02.617 [2024-09-29 21:53:21.493602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:22:02.617 [2024-09-29 21:53:21.493615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:02.617 [2024-09-29 21:53:21.493662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:02.617 [2024-09-29 21:53:21.493672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:02.617 [2024-09-29 21:53:21.493681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:22:02.617 [2024-09-29 21:53:21.493689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:02.617 [2024-09-29 21:53:21.493709] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:02.617 [2024-09-29 21:53:21.494495] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:22:02.617 [2024-09-29 21:53:21.494515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:02.617 [2024-09-29 21:53:21.494524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*:
[FTL][ftl0] name: Open cache bdev 00:22:02.617 [2024-09-29 21:53:21.494533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:22:02.617 [2024-09-29 21:53:21.494541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.617 [2024-09-29 21:53:21.495895] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:02.617 [2024-09-29 21:53:21.508838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.617 [2024-09-29 21:53:21.508871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:02.617 [2024-09-29 21:53:21.508883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.944 ms 00:22:02.617 [2024-09-29 21:53:21.508892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.617 [2024-09-29 21:53:21.508945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.617 [2024-09-29 21:53:21.508955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:02.617 [2024-09-29 21:53:21.508964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:02.617 [2024-09-29 21:53:21.508971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.617 [2024-09-29 21:53:21.515578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.617 [2024-09-29 21:53:21.515608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.617 [2024-09-29 21:53:21.515618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.555 ms 00:22:02.617 [2024-09-29 21:53:21.515626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.617 [2024-09-29 21:53:21.515699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.617 [2024-09-29 21:53:21.515709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.617 [2024-09-29 21:53:21.515718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:02.617 [2024-09-29 21:53:21.515725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.617 [2024-09-29 21:53:21.515779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.617 [2024-09-29 21:53:21.515790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:02.617 [2024-09-29 21:53:21.515798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:02.617 [2024-09-29 21:53:21.515806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.617 [2024-09-29 21:53:21.515827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.617 [2024-09-29 21:53:21.519497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.617 [2024-09-29 21:53:21.519527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.617 [2024-09-29 21:53:21.519537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:22:02.617 [2024-09-29 21:53:21.519544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.617 [2024-09-29 21:53:21.519574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.617 [2024-09-29 21:53:21.519582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:02.617 [2024-09-29 21:53:21.519591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.012 ms 00:22:02.617 [2024-09-29 21:53:21.519599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.617 [2024-09-29 21:53:21.519621] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:02.617 [2024-09-29 21:53:21.519641] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:02.617 [2024-09-29 21:53:21.519677] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:02.617 [2024-09-29 21:53:21.519693] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:02.617 [2024-09-29 21:53:21.519798] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:02.617 [2024-09-29 21:53:21.519809] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:02.617 [2024-09-29 21:53:21.519821] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:02.617 [2024-09-29 21:53:21.519834] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:02.617 [2024-09-29 21:53:21.519844] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:02.617 [2024-09-29 21:53:21.519852] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:02.617 [2024-09-29 21:53:21.519859] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:02.617 [2024-09-29 21:53:21.519867] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:02.617 [2024-09-29 21:53:21.519876] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:02.617 [2024-09-29 21:53:21.519884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.617 [2024-09-29 21:53:21.519891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:02.617 [2024-09-29 21:53:21.519900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:22:02.617 [2024-09-29 21:53:21.519907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.617 [2024-09-29 21:53:21.519989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.617 [2024-09-29 21:53:21.520000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:02.617 [2024-09-29 21:53:21.520008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:02.617 [2024-09-29 21:53:21.520015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.617 [2024-09-29 21:53:21.520128] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:02.617 [2024-09-29 21:53:21.520140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:02.617 [2024-09-29 21:53:21.520149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.617 [2024-09-29 21:53:21.520156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.617 [2024-09-29 21:53:21.520165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:02.617 [2024-09-29 21:53:21.520172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:02.617 [2024-09-29 
21:53:21.520179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:02.617 [2024-09-29 21:53:21.520187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:02.617 [2024-09-29 21:53:21.520194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:02.617 [2024-09-29 21:53:21.520201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.617 [2024-09-29 21:53:21.520210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:02.617 [2024-09-29 21:53:21.520217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:02.617 [2024-09-29 21:53:21.520224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:02.618 [2024-09-29 21:53:21.520236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:02.618 [2024-09-29 21:53:21.520243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:02.618 [2024-09-29 21:53:21.520250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.618 [2024-09-29 21:53:21.520257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:02.618 [2024-09-29 21:53:21.520266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:02.618 [2024-09-29 21:53:21.520273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.618 [2024-09-29 21:53:21.520280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:02.618 [2024-09-29 21:53:21.520287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:02.618 [2024-09-29 21:53:21.520293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.618 [2024-09-29 21:53:21.520300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:02.618 [2024-09-29 21:53:21.520307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:02.618 [2024-09-29 21:53:21.520315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.618 [2024-09-29 21:53:21.520322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:02.618 [2024-09-29 21:53:21.520328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:02.618 [2024-09-29 21:53:21.520334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.618 [2024-09-29 21:53:21.520341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:02.618 [2024-09-29 21:53:21.520348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:02.618 [2024-09-29 21:53:21.520354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:02.618 [2024-09-29 21:53:21.520361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:02.618 [2024-09-29 21:53:21.520368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:02.618 [2024-09-29 21:53:21.520375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.618 [2024-09-29 21:53:21.520382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:02.618 [2024-09-29 21:53:21.520407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:02.618 [2024-09-29 21:53:21.520414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:02.618 [2024-09-29 21:53:21.520421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log 00:22:02.618 [2024-09-29 21:53:21.520428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:02.618 [2024-09-29 21:53:21.520435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.618 [2024-09-29 21:53:21.520442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:02.618 [2024-09-29 21:53:21.520449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:02.618 [2024-09-29 21:53:21.520457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.618 [2024-09-29 21:53:21.520463] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:02.618 [2024-09-29 21:53:21.520472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:02.618 [2024-09-29 21:53:21.520481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:02.618 [2024-09-29 21:53:21.520490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:02.618 [2024-09-29 21:53:21.520497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:02.618 [2024-09-29 21:53:21.520505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:02.618 [2024-09-29 21:53:21.520512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:02.618 [2024-09-29 21:53:21.520520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:02.618 [2024-09-29 21:53:21.520527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:02.618 [2024-09-29 21:53:21.520534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:02.618 [2024-09-29 21:53:21.520542] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:02.618 [2024-09-29 21:53:21.520552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.618 [2024-09-29 21:53:21.520560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:02.618 [2024-09-29 21:53:21.520568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:02.618 [2024-09-29 21:53:21.520575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:02.618 [2024-09-29 21:53:21.520582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:02.618 [2024-09-29 21:53:21.520590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:02.618 [2024-09-29 21:53:21.520597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:02.618 [2024-09-29 21:53:21.520604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:02.618 [2024-09-29 21:53:21.520611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:02.618 [2024-09-29 21:53:21.520617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:02.618 [2024-09-29 21:53:21.520625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:02.618 [2024-09-29 21:53:21.520632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:02.618 [2024-09-29 21:53:21.520639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:02.618 [2024-09-29 21:53:21.520647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:02.618 [2024-09-29 21:53:21.520655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:02.618 [2024-09-29 21:53:21.520661] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:02.618 [2024-09-29 21:53:21.520671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:02.618 [2024-09-29 21:53:21.520679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:02.618 [2024-09-29 21:53:21.520686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:02.618 [2024-09-29 21:53:21.520692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:02.618 [2024-09-29 21:53:21.520700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:02.618 [2024-09-29 21:53:21.520707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.618 [2024-09-29 21:53:21.520715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:02.618 [2024-09-29 21:53:21.520722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:22:02.618 [2024-09-29 21:53:21.520729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.618 [2024-09-29 21:53:21.557651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.618 [2024-09-29 21:53:21.557702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.618 [2024-09-29 21:53:21.557716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.874 ms 00:22:02.618 [2024-09-29 21:53:21.557726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.618 [2024-09-29 21:53:21.557840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.618 [2024-09-29 21:53:21.557850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:02.618 [2024-09-29 21:53:21.557859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:02.618 [2024-09-29 21:53:21.557869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.618 [2024-09-29 21:53:21.590722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.618 [2024-09-29 21:53:21.590876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.618 
[2024-09-29 21:53:21.590898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.780 ms 00:22:02.618 [2024-09-29 21:53:21.590907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.618 [2024-09-29 21:53:21.590951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.618 [2024-09-29 21:53:21.590960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.618 [2024-09-29 21:53:21.590969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:02.618 [2024-09-29 21:53:21.590977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.618 [2024-09-29 21:53:21.591470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.619 [2024-09-29 21:53:21.591488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.619 [2024-09-29 21:53:21.591497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:22:02.619 [2024-09-29 21:53:21.591509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.619 [2024-09-29 21:53:21.591646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.619 [2024-09-29 21:53:21.591656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.619 [2024-09-29 21:53:21.591664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:22:02.619 [2024-09-29 21:53:21.591671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.605082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.605113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.878 [2024-09-29 21:53:21.605124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.391 ms 00:22:02.878 [2024-09-29 21:53:21.605132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.618084] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:02.878 [2024-09-29 21:53:21.618205] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:02.878 [2024-09-29 21:53:21.618222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.618230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:02.878 [2024-09-29 21:53:21.618240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.992 ms 00:22:02.878 [2024-09-29 21:53:21.618247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.651519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.651563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:02.878 [2024-09-29 21:53:21.651576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.229 ms 00:22:02.878 [2024-09-29 21:53:21.651585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.663249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.663283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:02.878 [2024-09-29 21:53:21.663295] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 11.624 ms 00:22:02.878 [2024-09-29 21:53:21.663302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.674914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.674944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:02.878 [2024-09-29 21:53:21.674954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.568 ms 00:22:02.878 [2024-09-29 21:53:21.674962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.675600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.675622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:02.878 [2024-09-29 21:53:21.675632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:22:02.878 [2024-09-29 21:53:21.675640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.734439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.734499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:02.878 [2024-09-29 21:53:21.734513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.779 ms 00:22:02.878 [2024-09-29 21:53:21.734522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.745332] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:02.878 [2024-09-29 21:53:21.748259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.748291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:02.878 [2024-09-29 21:53:21.748304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.686 ms 00:22:02.878 [2024-09-29 21:53:21.748317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.748421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.748434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:02.878 [2024-09-29 21:53:21.748445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:02.878 [2024-09-29 21:53:21.748453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.749160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.749193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:02.878 [2024-09-29 21:53:21.749203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:22:02.878 [2024-09-29 21:53:21.749211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.749239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878 [2024-09-29 21:53:21.749249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:02.878 [2024-09-29 21:53:21.749257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:02.878 [2024-09-29 21:53:21.749265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-09-29 21:53:21.749300] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: 
[FTL][ftl0] Self test skipped
00:22:02.878 [2024-09-29 21:53:21.749309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:02.878 [2024-09-29 21:53:21.749317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:22:02.878 [2024-09-29 21:53:21.749329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:22:02.878 [2024-09-29 21:53:21.749337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:02.878 [2024-09-29 21:53:21.772825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:02.878 [2024-09-29 21:53:21.772859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:22:02.878 [2024-09-29 21:53:21.772871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.469 ms
00:22:02.878 [2024-09-29 21:53:21.772879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:02.878 [2024-09-29 21:53:21.772952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:02.878 [2024-09-29 21:53:21.772962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:02.878 [2024-09-29 21:53:21.772971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:22:02.878 [2024-09-29 21:53:21.772979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:02.878 [2024-09-29 21:53:21.773998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.006 ms, result 0
00:22:25.267  Copying: 43/1024 [MB] (43 MBps) Copying: 89/1024 [MB] (45 MBps) Copying: 135/1024 [MB] (45 MBps) Copying: 183/1024 [MB] (47 MBps) Copying: 229/1024 [MB] (46 MBps) Copying: 276/1024 [MB] (46 MBps) Copying: 324/1024 [MB] (48 MBps) Copying: 371/1024 [MB] (47 MBps) Copying: 419/1024 [MB] (47 MBps) Copying: 468/1024 [MB] (48 MBps) Copying: 511/1024 [MB] (43 MBps) Copying: 559/1024 [MB] (47 MBps) Copying: 607/1024 [MB] (47 MBps) Copying: 655/1024 [MB] (48 MBps) Copying: 702/1024 [MB] (46 MBps) Copying: 748/1024 [MB] (45 MBps) Copying: 795/1024 [MB] (47 MBps) Copying: 841/1024 [MB] (45 MBps) Copying: 886/1024 [MB] (45 MBps) Copying: 933/1024 [MB] (46 MBps) Copying: 979/1024 [MB] (45 MBps) Copying: 1024/1024 [MB] (average 46 MBps)
[2024-09-29 21:53:44.028559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:25.267 [2024-09-29 21:53:44.028647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:25.267 [2024-09-29 21:53:44.028669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:22:25.267 [2024-09-29 21:53:44.028698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:25.267 [2024-09-29 21:53:44.028732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:25.267 [2024-09-29 21:53:44.035997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:25.267 [2024-09-29 21:53:44.036048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:25.267 [2024-09-29 21:53:44.036064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.240 ms
00:22:25.267 [2024-09-29 21:53:44.036078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:25.267 [2024-09-29 21:53:44.036447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:25.267 [2024-09-29 21:53:44.036470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0]
name: Stop core poller 00:22:25.267 [2024-09-29 21:53:44.036484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:22:25.267 [2024-09-29 21:53:44.036496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.267 [2024-09-29 21:53:44.040748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.267 [2024-09-29 21:53:44.040770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:25.267 [2024-09-29 21:53:44.040780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.227 ms 00:22:25.267 [2024-09-29 21:53:44.040789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.268 [2024-09-29 21:53:44.046960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.268 [2024-09-29 21:53:44.047148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:25.268 [2024-09-29 21:53:44.047166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:22:25.268 [2024-09-29 21:53:44.047175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.268 [2024-09-29 21:53:44.071469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.268 [2024-09-29 21:53:44.071509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:25.268 [2024-09-29 21:53:44.071521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.222 ms 00:22:25.268 [2024-09-29 21:53:44.071529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.268 [2024-09-29 21:53:44.085858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.268 [2024-09-29 21:53:44.086038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:25.268 [2024-09-29 21:53:44.086057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.301 ms 00:22:25.268 [2024-09-29 21:53:44.086066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.268 [2024-09-29 21:53:44.088097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.268 [2024-09-29 21:53:44.088144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:25.268 [2024-09-29 21:53:44.088157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.002 ms 00:22:25.268 [2024-09-29 21:53:44.088165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.268 [2024-09-29 21:53:44.111510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.268 [2024-09-29 21:53:44.111545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:25.268 [2024-09-29 21:53:44.111556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.329 ms 00:22:25.268 [2024-09-29 21:53:44.111564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.268 [2024-09-29 21:53:44.134457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.268 [2024-09-29 21:53:44.134493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:25.268 [2024-09-29 21:53:44.134504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.872 ms 00:22:25.268 [2024-09-29 21:53:44.134511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.268 [2024-09-29 21:53:44.156838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.268 [2024-09-29 
21:53:44.157044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:25.268 [2024-09-29 21:53:44.157062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.307 ms 00:22:25.268 [2024-09-29 21:53:44.157070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.268 [2024-09-29 21:53:44.178986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.268 [2024-09-29 21:53:44.179018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:25.268 [2024-09-29 21:53:44.179029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.868 ms 00:22:25.268 [2024-09-29 21:53:44.179037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.268 [2024-09-29 21:53:44.179056] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:25.268 [2024-09-29 21:53:44.179070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:25.268 [2024-09-29 21:53:44.179080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:22:25.268 [2024-09-29 21:53:44.179089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179415] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:25.268 [2024-09-29 21:53:44.179561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 
21:53:44.179626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:22:25.269 [2024-09-29 21:53:44.179830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:25.269 [2024-09-29 21:53:44.179892] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:25.269 [2024-09-29 21:53:44.179900] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3d6e5af9-4fd9-4dfc-bde3-3b26bab28ab1 00:22:25.269 [2024-09-29 21:53:44.179907] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:22:25.269 [2024-09-29 21:53:44.179914] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:25.269 [2024-09-29 21:53:44.179922] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:25.269 [2024-09-29 21:53:44.179930] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:25.269 [2024-09-29 21:53:44.179936] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:25.269 [2024-09-29 21:53:44.179949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:25.269 [2024-09-29 21:53:44.179956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:25.269 [2024-09-29 21:53:44.179963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:25.269 [2024-09-29 21:53:44.179969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:25.269 [2024-09-29 21:53:44.179976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.269 [2024-09-29 21:53:44.179990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:25.269 [2024-09-29 21:53:44.180000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:22:25.269 [2024-09-29 21:53:44.180007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.269 [2024-09-29 21:53:44.192687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.269 [2024-09-29 21:53:44.192720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:25.269 [2024-09-29 21:53:44.192731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.665 ms 00:22:25.269 [2024-09-29 21:53:44.192744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.269 [2024-09-29 21:53:44.193103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.269 [2024-09-29 21:53:44.193113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:25.269 [2024-09-29 21:53:44.193121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:22:25.269 [2024-09-29 
21:53:44.193128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.269 [2024-09-29 21:53:44.222287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.269 [2024-09-29 21:53:44.222336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:25.269 [2024-09-29 21:53:44.222348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.269 [2024-09-29 21:53:44.222356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.269 [2024-09-29 21:53:44.222437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.269 [2024-09-29 21:53:44.222446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:25.269 [2024-09-29 21:53:44.222454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.269 [2024-09-29 21:53:44.222462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.269 [2024-09-29 21:53:44.222531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.269 [2024-09-29 21:53:44.222542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:25.269 [2024-09-29 21:53:44.222550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.269 [2024-09-29 21:53:44.222562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.269 [2024-09-29 21:53:44.222578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.269 [2024-09-29 21:53:44.222586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:25.269 [2024-09-29 21:53:44.222594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.269 [2024-09-29 21:53:44.222601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.529 [2024-09-29 21:53:44.302270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.529 [2024-09-29 21:53:44.302332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:25.529 [2024-09-29 21:53:44.302350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.529 [2024-09-29 21:53:44.302359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.529 [2024-09-29 21:53:44.366812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.529 [2024-09-29 21:53:44.367026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:25.529 [2024-09-29 21:53:44.367044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.529 [2024-09-29 21:53:44.367053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.529 [2024-09-29 21:53:44.367146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.529 [2024-09-29 21:53:44.367157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.529 [2024-09-29 21:53:44.367165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.529 [2024-09-29 21:53:44.367173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.529 [2024-09-29 21:53:44.367214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.529 [2024-09-29 21:53:44.367223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.529 [2024-09-29 21:53:44.367231] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.529 [2024-09-29 21:53:44.367239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.529 [2024-09-29 21:53:44.367334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.529 [2024-09-29 21:53:44.367345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.529 [2024-09-29 21:53:44.367353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.529 [2024-09-29 21:53:44.367361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.529 [2024-09-29 21:53:44.367418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.529 [2024-09-29 21:53:44.367429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:25.529 [2024-09-29 21:53:44.367437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.529 [2024-09-29 21:53:44.367445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.529 [2024-09-29 21:53:44.367483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.529 [2024-09-29 21:53:44.367493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.529 [2024-09-29 21:53:44.367501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.529 [2024-09-29 21:53:44.367509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.529 [2024-09-29 21:53:44.367560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:25.529 [2024-09-29 21:53:44.367571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.529 [2024-09-29 21:53:44.367579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:25.529 [2024-09-29 21:53:44.367588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.529 [2024-09-29 21:53:44.367712] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.133 ms, result 0 00:22:26.464 00:22:26.464 00:22:26.464 21:53:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:22:28.422 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:22:28.422 21:53:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:22:28.422 21:53:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:22:28.422 21:53:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:28.422 21:53:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:28.719 21:53:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:28.719 21:53:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:28.719 21:53:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:22:28.719 21:53:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 76077 00:22:28.719 21:53:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 76077 ']' 00:22:28.719 21:53:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 76077 00:22:28.719 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76077) - No such process 00:22:28.719 Process with pid 76077 is not found 00:22:28.719 21:53:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 76077 is not found' 00:22:28.719 21:53:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:22:28.981 Remove shared memory files 00:22:28.981 21:53:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:22:28.981 21:53:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:28.981 21:53:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:22:28.981 21:53:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:22:28.981 21:53:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:22:28.981 21:53:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:28.981 21:53:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:22:28.981 00:22:28.981 real 2m19.060s 00:22:28.981 user 2m34.544s 00:22:28.981 sys 0m24.026s 00:22:28.981 21:53:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:28.981 ************************************ 00:22:28.981 END TEST ftl_dirty_shutdown 00:22:28.981 ************************************ 00:22:28.981 21:53:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:28.981 21:53:47 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:22:28.981 21:53:47 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:28.981 21:53:47 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:28.981 21:53:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:28.981 ************************************ 00:22:28.981 START TEST ftl_upgrade_shutdown 00:22:28.981 ************************************ 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:22:28.981 * Looking for test storage... 
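The md5sum -c step above is the heart of the dirty-shutdown test: data written before the unclean shutdown must read back bit-identical after FTL replays its P2L checkpoints and rebuilds the L2P. A minimal sketch of that pattern, assuming the $testdir and testfile names used by this run; the write / dirty-shutdown / recover steps in between are elided:
# taken right after the data is written, before the dirty shutdown
md5sum "$testdir/testfile2" > "$testdir/testfile2.md5"
# ... bdev is torn down dirty, re-created, recovery replays P2L/L2P ...
# -c re-reads the file and exits non-zero on any mismatch
md5sum -c "$testdir/testfile2.md5"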
00:22:28.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:22:28.981 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:28.982 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:28.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.982 --rc genhtml_branch_coverage=1 00:22:28.982 --rc genhtml_function_coverage=1 00:22:28.982 --rc genhtml_legend=1 00:22:28.982 --rc geninfo_all_blocks=1 00:22:28.982 --rc geninfo_unexecuted_blocks=1 00:22:28.982 00:22:28.982 ' 00:22:28.982 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:28.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.982 --rc genhtml_branch_coverage=1 00:22:28.982 --rc genhtml_function_coverage=1 00:22:28.982 --rc genhtml_legend=1 00:22:28.982 --rc geninfo_all_blocks=1 00:22:28.982 --rc geninfo_unexecuted_blocks=1 00:22:28.982 00:22:28.982 ' 00:22:28.982 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:28.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.982 --rc genhtml_branch_coverage=1 00:22:28.982 --rc genhtml_function_coverage=1 00:22:28.982 --rc genhtml_legend=1 00:22:28.982 --rc geninfo_all_blocks=1 00:22:28.982 --rc geninfo_unexecuted_blocks=1 00:22:28.982 00:22:28.982 ' 00:22:28.982 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:28.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:28.982 --rc genhtml_branch_coverage=1 00:22:28.982 --rc genhtml_function_coverage=1 00:22:28.982 --rc genhtml_legend=1 00:22:28.982 --rc geninfo_all_blocks=1 00:22:28.982 --rc geninfo_unexecuted_blocks=1 00:22:28.982 00:22:28.982 ' 00:22:28.982 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:28.982 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- 
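The lt/cmp_versions trace above is a plain component-wise version compare: both strings are split on '.', '-' and ':', missing components count as 0, and the first unequal component decides. A condensed sketch of the less-than case only (the traced helper in scripts/common.sh handles the other operators the same way):
cmp_lt() {                      # usage: cmp_lt 1.15 2  -> status 0 iff $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                    # equal, so not less-than
}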
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:29.241 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:22:29.242 21:53:47 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=77658 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 77658 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77658 ']' 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.242 21:53:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:29.242 [2024-09-29 21:53:48.056086] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
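The target bring-up above follows the usual autotest pattern: launch spdk_tgt pinned to core 0, then poll its UNIX RPC socket until it answers before issuing any bdev RPCs. A minimal sketch under those assumptions, using the $spdk_tgt_bin and $rootdir variables exported earlier (the real waitforlisten also caps retries at max_retries=100):
"$spdk_tgt_bin" --cpumask='[0]' &
spdk_tgt_pid=$!
# rpc_get_methods succeeds once the app is up and listening on the socket
while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$spdk_tgt_pid" || { echo "spdk_tgt died during startup" >&2; exit 1; }
    sleep 0.1
done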
00:22:29.242 [2024-09-29 21:53:48.056475] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77658 ] 00:22:29.242 [2024-09-29 21:53:48.199240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.501 [2024-09-29 21:53:48.411789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:22:30.069 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:30.326 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:22:30.584 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:22:30.584 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:30.584 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:22:30.584 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:22:30.584 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:30.584 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:30.584 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:22:30.584 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:22:30.584 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:30.584 { 00:22:30.585 "name": "basen1", 00:22:30.585 "aliases": [ 00:22:30.585 "5c97b65a-f801-4f4c-aa00-3917ef710463" 00:22:30.585 ], 00:22:30.585 "product_name": "NVMe disk", 00:22:30.585 "block_size": 4096, 00:22:30.585 "num_blocks": 1310720, 00:22:30.585 "uuid": "5c97b65a-f801-4f4c-aa00-3917ef710463", 00:22:30.585 "numa_id": -1, 00:22:30.585 "assigned_rate_limits": { 00:22:30.585 "rw_ios_per_sec": 0, 00:22:30.585 "rw_mbytes_per_sec": 0, 00:22:30.585 "r_mbytes_per_sec": 0, 00:22:30.585 "w_mbytes_per_sec": 0 00:22:30.585 }, 00:22:30.585 "claimed": true, 00:22:30.585 "claim_type": "read_many_write_one", 00:22:30.585 "zoned": false, 00:22:30.585 "supported_io_types": { 00:22:30.585 "read": true, 00:22:30.585 "write": true, 00:22:30.585 "unmap": true, 00:22:30.585 "flush": true, 00:22:30.585 "reset": true, 00:22:30.585 "nvme_admin": true, 00:22:30.585 "nvme_io": true, 00:22:30.585 "nvme_io_md": false, 00:22:30.585 "write_zeroes": true, 00:22:30.585 "zcopy": false, 00:22:30.585 "get_zone_info": false, 00:22:30.585 "zone_management": false, 00:22:30.585 "zone_append": false, 00:22:30.585 "compare": true, 00:22:30.585 "compare_and_write": false, 00:22:30.585 "abort": true, 00:22:30.585 "seek_hole": false, 00:22:30.585 "seek_data": false, 00:22:30.585 "copy": true, 00:22:30.585 "nvme_iov_md": false 00:22:30.585 }, 00:22:30.585 "driver_specific": { 00:22:30.585 "nvme": [ 00:22:30.585 { 00:22:30.585 "pci_address": "0000:00:11.0", 00:22:30.585 "trid": { 00:22:30.585 "trtype": "PCIe", 00:22:30.585 "traddr": "0000:00:11.0" 00:22:30.585 }, 00:22:30.585 "ctrlr_data": { 00:22:30.585 "cntlid": 0, 00:22:30.585 "vendor_id": "0x1b36", 00:22:30.585 "model_number": "QEMU NVMe Ctrl", 00:22:30.585 "serial_number": "12341", 00:22:30.585 "firmware_revision": "8.0.0", 00:22:30.585 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:30.585 "oacs": { 00:22:30.585 "security": 0, 00:22:30.585 "format": 1, 00:22:30.585 "firmware": 0, 00:22:30.585 "ns_manage": 1 00:22:30.585 }, 00:22:30.585 "multi_ctrlr": false, 00:22:30.585 "ana_reporting": false 00:22:30.585 }, 00:22:30.585 "vs": { 00:22:30.585 "nvme_version": "1.4" 00:22:30.585 }, 00:22:30.585 "ns_data": { 00:22:30.585 "id": 1, 00:22:30.585 "can_share": false 00:22:30.585 } 00:22:30.585 } 00:22:30.585 ], 00:22:30.585 "mp_policy": "active_passive" 00:22:30.585 } 00:22:30.585 } 00:22:30.585 ]' 00:22:30.585 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=2d5b9253-2d5a-4dda-be47-f30b5df4ebd4 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:30.844 21:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2d5b9253-2d5a-4dda-be47-f30b5df4ebd4 00:22:31.101 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:22:31.360 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=a272967c-d4d9-49f2-aa3a-8aef27b202b8 00:22:31.360 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u a272967c-d4d9-49f2-aa3a-8aef27b202b8 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=78d44e52-d5e4-4b95-9380-31dfcff309ee 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 78d44e52-d5e4-4b95-9380-31dfcff309ee ]] 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 78d44e52-d5e4-4b95-9380-31dfcff309ee 5120 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=78d44e52-d5e4-4b95-9380-31dfcff309ee 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 78d44e52-d5e4-4b95-9380-31dfcff309ee 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=78d44e52-d5e4-4b95-9380-31dfcff309ee 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:31.618 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78d44e52-d5e4-4b95-9380-31dfcff309ee 00:22:31.877 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:31.877 { 00:22:31.877 "name": "78d44e52-d5e4-4b95-9380-31dfcff309ee", 00:22:31.877 "aliases": [ 00:22:31.877 "lvs/basen1p0" 00:22:31.877 ], 00:22:31.877 "product_name": "Logical Volume", 00:22:31.877 "block_size": 4096, 00:22:31.877 "num_blocks": 5242880, 00:22:31.877 "uuid": "78d44e52-d5e4-4b95-9380-31dfcff309ee", 00:22:31.877 "assigned_rate_limits": { 00:22:31.877 "rw_ios_per_sec": 0, 00:22:31.877 "rw_mbytes_per_sec": 0, 00:22:31.877 "r_mbytes_per_sec": 0, 00:22:31.877 "w_mbytes_per_sec": 0 00:22:31.877 }, 00:22:31.877 "claimed": false, 00:22:31.877 "zoned": false, 00:22:31.877 "supported_io_types": { 00:22:31.877 "read": true, 00:22:31.877 "write": true, 00:22:31.877 "unmap": true, 00:22:31.877 "flush": false, 00:22:31.877 "reset": true, 00:22:31.877 "nvme_admin": false, 00:22:31.877 "nvme_io": false, 00:22:31.877 "nvme_io_md": false, 00:22:31.877 "write_zeroes": 
true, 00:22:31.877 "zcopy": false, 00:22:31.877 "get_zone_info": false, 00:22:31.877 "zone_management": false, 00:22:31.877 "zone_append": false, 00:22:31.877 "compare": false, 00:22:31.877 "compare_and_write": false, 00:22:31.877 "abort": false, 00:22:31.877 "seek_hole": true, 00:22:31.877 "seek_data": true, 00:22:31.877 "copy": false, 00:22:31.877 "nvme_iov_md": false 00:22:31.877 }, 00:22:31.877 "driver_specific": { 00:22:31.877 "lvol": { 00:22:31.877 "lvol_store_uuid": "a272967c-d4d9-49f2-aa3a-8aef27b202b8", 00:22:31.877 "base_bdev": "basen1", 00:22:31.877 "thin_provision": true, 00:22:31.877 "num_allocated_clusters": 0, 00:22:31.877 "snapshot": false, 00:22:31.877 "clone": false, 00:22:31.877 "esnap_clone": false 00:22:31.877 } 00:22:31.877 } 00:22:31.877 } 00:22:31.877 ]' 00:22:31.877 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:31.877 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:31.877 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:31.877 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:22:31.877 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:22:31.877 21:53:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:22:31.877 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:22:31.877 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:31.877 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:22:32.135 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:22:32.135 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:22:32.135 21:53:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:22:32.393 21:53:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:22:32.393 21:53:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:22:32.394 21:53:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 78d44e52-d5e4-4b95-9380-31dfcff309ee -c cachen1p0 --l2p_dram_limit 2 00:22:32.653 [2024-09-29 21:53:51.379089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.653 [2024-09-29 21:53:51.379333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:32.653 [2024-09-29 21:53:51.379355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:32.653 [2024-09-29 21:53:51.379363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.653 [2024-09-29 21:53:51.379441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.653 [2024-09-29 21:53:51.379452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:32.653 [2024-09-29 21:53:51.379461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:22:32.653 [2024-09-29 21:53:51.379468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.653 [2024-09-29 21:53:51.379489] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:32.653 [2024-09-29 
21:53:51.380085] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:32.653 [2024-09-29 21:53:51.380111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.653 [2024-09-29 21:53:51.380118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:32.653 [2024-09-29 21:53:51.380126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.626 ms 00:22:32.653 [2024-09-29 21:53:51.380136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.653 [2024-09-29 21:53:51.380273] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a23b268d-30e4-4780-9e3e-707f9a289069 00:22:32.653 [2024-09-29 21:53:51.381608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.653 [2024-09-29 21:53:51.381642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:22:32.653 [2024-09-29 21:53:51.381651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:22:32.653 [2024-09-29 21:53:51.381660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.653 [2024-09-29 21:53:51.388513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.653 [2024-09-29 21:53:51.388544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:32.653 [2024-09-29 21:53:51.388552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.815 ms 00:22:32.653 [2024-09-29 21:53:51.388561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.653 [2024-09-29 21:53:51.388598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.653 [2024-09-29 21:53:51.388607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:32.653 [2024-09-29 21:53:51.388614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:22:32.653 [2024-09-29 21:53:51.388628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.653 [2024-09-29 21:53:51.388680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.653 [2024-09-29 21:53:51.388691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:32.653 [2024-09-29 21:53:51.388699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:22:32.653 [2024-09-29 21:53:51.388707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.653 [2024-09-29 21:53:51.388726] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:32.653 [2024-09-29 21:53:51.392003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.653 [2024-09-29 21:53:51.392028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:32.654 [2024-09-29 21:53:51.392038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.281 ms 00:22:32.654 [2024-09-29 21:53:51.392044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.654 [2024-09-29 21:53:51.392069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.654 [2024-09-29 21:53:51.392077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:32.654 [2024-09-29 21:53:51.392085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:32.654 [2024-09-29 21:53:51.392094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:22:32.654 [2024-09-29 21:53:51.392110] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:22:32.654 [2024-09-29 21:53:51.392222] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:32.654 [2024-09-29 21:53:51.392236] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:32.654 [2024-09-29 21:53:51.392245] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:32.654 [2024-09-29 21:53:51.392256] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:32.654 [2024-09-29 21:53:51.392265] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:32.654 [2024-09-29 21:53:51.392273] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:32.654 [2024-09-29 21:53:51.392279] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:32.654 [2024-09-29 21:53:51.392287] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:32.654 [2024-09-29 21:53:51.392293] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:32.654 [2024-09-29 21:53:51.392301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.654 [2024-09-29 21:53:51.392307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:32.654 [2024-09-29 21:53:51.392314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:22:32.654 [2024-09-29 21:53:51.392320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.654 [2024-09-29 21:53:51.392398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.654 [2024-09-29 21:53:51.392415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:32.654 [2024-09-29 21:53:51.392424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:22:32.654 [2024-09-29 21:53:51.392430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.654 [2024-09-29 21:53:51.392509] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:32.654 [2024-09-29 21:53:51.392518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:32.654 [2024-09-29 21:53:51.392526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:32.654 [2024-09-29 21:53:51.392532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:32.654 [2024-09-29 21:53:51.392545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:32.654 [2024-09-29 21:53:51.392557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:32.654 [2024-09-29 21:53:51.392564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:32.654 [2024-09-29 21:53:51.392569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:32.654 [2024-09-29 21:53:51.392584] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:22:32.654 [2024-09-29 21:53:51.392591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:32.654 [2024-09-29 21:53:51.392605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:32.654 [2024-09-29 21:53:51.392610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:32.654 [2024-09-29 21:53:51.392631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:32.654 [2024-09-29 21:53:51.392637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:32.654 [2024-09-29 21:53:51.392650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:32.654 [2024-09-29 21:53:51.392655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:32.654 [2024-09-29 21:53:51.392663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:32.654 [2024-09-29 21:53:51.392668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:32.654 [2024-09-29 21:53:51.392674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:32.654 [2024-09-29 21:53:51.392679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:32.654 [2024-09-29 21:53:51.392686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:32.654 [2024-09-29 21:53:51.392692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:32.654 [2024-09-29 21:53:51.392698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:32.654 [2024-09-29 21:53:51.392703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:32.654 [2024-09-29 21:53:51.392710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:32.654 [2024-09-29 21:53:51.392715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:32.654 [2024-09-29 21:53:51.392723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:32.654 [2024-09-29 21:53:51.392728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:32.654 [2024-09-29 21:53:51.392740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:32.654 [2024-09-29 21:53:51.392746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:32.654 [2024-09-29 21:53:51.392757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:32.654 [2024-09-29 21:53:51.392773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:32.654 [2024-09-29 21:53:51.392780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392785] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:22:32.654 [2024-09-29 21:53:51.392792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:32.654 [2024-09-29 21:53:51.392799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:32.654 [2024-09-29 21:53:51.392807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:32.654 [2024-09-29 21:53:51.392814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:32.654 [2024-09-29 21:53:51.392823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:32.654 [2024-09-29 21:53:51.392830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:32.654 [2024-09-29 21:53:51.392837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:32.654 [2024-09-29 21:53:51.392842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:32.654 [2024-09-29 21:53:51.392848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:32.654 [2024-09-29 21:53:51.392857] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:32.654 [2024-09-29 21:53:51.392865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:32.654 [2024-09-29 21:53:51.392880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:32.654 [2024-09-29 21:53:51.392898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:32.654 [2024-09-29 21:53:51.392904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:32.654 [2024-09-29 21:53:51.392910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:32.654 [2024-09-29 21:53:51.392917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:32.654 [2024-09-29 21:53:51.392961] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:32.654 [2024-09-29 21:53:51.392968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:32.654 [2024-09-29 21:53:51.392983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:32.654 [2024-09-29 21:53:51.392989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:32.655 [2024-09-29 21:53:51.392997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:32.655 [2024-09-29 21:53:51.393002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:32.655 [2024-09-29 21:53:51.393011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:32.655 [2024-09-29 21:53:51.393017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.548 ms 00:22:32.655 [2024-09-29 21:53:51.393027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:32.655 [2024-09-29 21:53:51.393072] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
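For reference, the FTL bring-up traced above reduces to three RPCs, and the size probe checks out: 5242880 blocks * 4096 B = 20480 MiB, matching the "Base device capacity" in the layout dump, while the 14.50 MiB l2p region holds the reported 3774873 entries * 4 B (~14.4 MiB). A condensed re-trace, with all identifiers copied from this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Attach the PCIe controller backing the NV cache -> namespace bdev cachen1
  $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0

  # Carve one 5120 MiB split to serve as the write-buffer cache -> cachen1p0
  $RPC bdev_split_create cachen1 -s 5120 1

  # Create the FTL bdev on the thin-provisioned lvol plus the cache split;
  # --l2p_dram_limit 2 caps the resident L2P at 2 MiB (the log later reports
  # "l2p maximum resident size is: 1 (of 2) MiB").
  $RPC -t 60 bdev_ftl_create -b ftl -d 78d44e52-d5e4-4b95-9380-31dfcff309ee -c cachen1p0 --l2p_dram_limit 2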
00:22:32.655 [2024-09-29 21:53:51.393085] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:22:34.555 [2024-09-29 21:53:53.495188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.555 [2024-09-29 21:53:53.495259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:22:34.555 [2024-09-29 21:53:53.495276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2102.104 ms 00:22:34.555 [2024-09-29 21:53:53.495286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.555 [2024-09-29 21:53:53.523375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.555 [2024-09-29 21:53:53.523440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:34.555 [2024-09-29 21:53:53.523455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.849 ms 00:22:34.555 [2024-09-29 21:53:53.523465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.555 [2024-09-29 21:53:53.523572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.555 [2024-09-29 21:53:53.523586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:34.555 [2024-09-29 21:53:53.523595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:22:34.555 [2024-09-29 21:53:53.523610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.566033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.566089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:34.813 [2024-09-29 21:53:53.566107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.369 ms 00:22:34.813 [2024-09-29 21:53:53.566119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.566179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.566191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:34.813 [2024-09-29 21:53:53.566200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:34.813 [2024-09-29 21:53:53.566210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.566709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.566737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:34.813 [2024-09-29 21:53:53.566754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.400 ms 00:22:34.813 [2024-09-29 21:53:53.566767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.566814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.566824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:34.813 [2024-09-29 21:53:53.566833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:22:34.813 [2024-09-29 21:53:53.566845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.585828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.585863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:34.813 [2024-09-29 21:53:53.585874] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.959 ms 00:22:34.813 [2024-09-29 21:53:53.585883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.598139] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:34.813 [2024-09-29 21:53:53.599174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.599338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:34.813 [2024-09-29 21:53:53.599361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.197 ms 00:22:34.813 [2024-09-29 21:53:53.599369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.622564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.622604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:22:34.813 [2024-09-29 21:53:53.622620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.149 ms 00:22:34.813 [2024-09-29 21:53:53.622628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.622717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.622728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:34.813 [2024-09-29 21:53:53.622741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:22:34.813 [2024-09-29 21:53:53.622749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.645678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.645827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:22:34.813 [2024-09-29 21:53:53.645849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.882 ms 00:22:34.813 [2024-09-29 21:53:53.645858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.668443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.668587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:22:34.813 [2024-09-29 21:53:53.668607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.548 ms 00:22:34.813 [2024-09-29 21:53:53.668615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.669179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.669197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:34.813 [2024-09-29 21:53:53.669208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:22:34.813 [2024-09-29 21:53:53.669217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.740309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.740534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:22:34.813 [2024-09-29 21:53:53.740559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 71.051 ms 00:22:34.813 [2024-09-29 21:53:53.740570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.765218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:22:34.813 [2024-09-29 21:53:53.765265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:22:34.813 [2024-09-29 21:53:53.765281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.564 ms 00:22:34.813 [2024-09-29 21:53:53.765289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:34.813 [2024-09-29 21:53:53.788452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:34.813 [2024-09-29 21:53:53.788495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:22:34.813 [2024-09-29 21:53:53.788508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.118 ms 00:22:34.813 [2024-09-29 21:53:53.788516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.071 [2024-09-29 21:53:53.811216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.071 [2024-09-29 21:53:53.811256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:35.071 [2024-09-29 21:53:53.811270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.659 ms 00:22:35.071 [2024-09-29 21:53:53.811278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.071 [2024-09-29 21:53:53.811319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.071 [2024-09-29 21:53:53.811328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:35.071 [2024-09-29 21:53:53.811344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:35.071 [2024-09-29 21:53:53.811352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.071 [2024-09-29 21:53:53.811464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.071 [2024-09-29 21:53:53.811476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:35.071 [2024-09-29 21:53:53.811487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:22:35.071 [2024-09-29 21:53:53.811495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.071 [2024-09-29 21:53:53.812545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2432.985 ms, result 0 00:22:35.071 { 00:22:35.071 "name": "ftl", 00:22:35.071 "uuid": "a23b268d-30e4-4780-9e3e-707f9a289069" 00:22:35.071 } 00:22:35.071 21:53:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:22:35.071 [2024-09-29 21:53:54.019732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.071 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:22:35.328 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:22:35.586 [2024-09-29 21:53:54.392130] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:35.586 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:22:35.844 [2024-09-29 21:53:54.596505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:35.844 21:53:54 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:22:36.103 Fill FTL, iteration 1 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=77769 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 77769 /var/tmp/spdk.tgt.sock 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77769 ']' 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:22:36.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:36.103 21:53:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:36.103 [2024-09-29 21:53:55.014565] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
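The target-side export just traced publishes the FTL bdev over NVMe/TCP so a separate spdk_dd process can drive I/O against it. Condensed, with values copied from this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_transport --trtype TCP
  # One subsystem: allow any host (-a), at most one namespace (-m 1)
  $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  # Expose the FTL bdev as that subsystem's namespace
  $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  # Listen on loopback, port 4420
  $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $RPC save_config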
00:22:36.103 [2024-09-29 21:53:55.014847] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77769 ] 00:22:36.363 [2024-09-29 21:53:55.156587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.363 [2024-09-29 21:53:55.308719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.930 21:53:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:36.930 21:53:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:36.930 21:53:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:22:37.188 ftln1 00:22:37.188 21:53:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:22:37.188 21:53:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 77769 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77769 ']' 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77769 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77769 00:22:37.446 killing process with pid 77769 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77769' 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77769 00:22:37.446 21:53:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77769 00:22:38.821 21:53:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:22:38.821 21:53:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:38.821 [2024-09-29 21:53:57.616965] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
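tcp_initiator_setup, traced above, only needs to run once: a short-lived spdk_tgt attaches to the exported subsystem, and its bdev configuration is captured as JSON so every later spdk_dd run can recreate the ftln1 attachment from the file alone (common.sh@153 checks for the file and skips setup on subsequent calls). A sketch of the sequence; the redirect into ini.json is an assumption, since the trace only shows the JSON fragments being emitted:

  BIN=/home/vagrant/spdk_repo/spdk/build/bin
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  INI=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

  $BIN/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!                                   # 77769 in this run

  # Attach the exported namespace over loopback TCP -> local bdev ftln1
  $RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

  # Snapshot the bdev subsystem so spdk_dd can replay it via --json
  { echo '{"subsystems": ['; $RPC save_subsystem_config -n bdev; echo ']}'; } > "$INI"  # redirect assumed

  kill "$spdk_ini_pid"                              # killprocess 77769 in the trace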
00:22:38.821 [2024-09-29 21:53:57.617609] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77805 ] 00:22:38.821 [2024-09-29 21:53:57.765103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.080 [2024-09-29 21:53:57.918540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.823  Copying: 268/1024 [MB] (268 MBps) Copying: 538/1024 [MB] (270 MBps) Copying: 809/1024 [MB] (271 MBps) Copying: 1024/1024 [MB] (average 269 MBps) 00:22:43.823 00:22:43.823 Calculate MD5 checksum, iteration 1 00:22:43.823 21:54:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:22:43.823 21:54:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:22:43.823 21:54:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:43.823 21:54:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:43.823 21:54:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:43.823 21:54:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:43.823 21:54:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:43.823 21:54:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:43.824 [2024-09-29 21:54:02.739276] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:22:43.824 [2024-09-29 21:54:02.739374] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77862 ] 00:22:44.082 [2024-09-29 21:54:02.881418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.082 [2024-09-29 21:54:03.038174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.590  Copying: 620/1024 [MB] (620 MBps) Copying: 1024/1024 [MB] (average 631 MBps) 00:22:46.590 00:22:46.590 21:54:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:22:46.590 21:54:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:49.121 Fill FTL, iteration 2 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=3ad1aa1f3c896ec6f12579b5b25b434b 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:49.121 21:54:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:49.121 [2024-09-29 21:54:07.576989] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
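The second fill just launched repeats the pass-1 pattern one 1024 MiB window further in. Reconstructed from the trace of upgrade_shutdown.sh@38-48, the loop driving these passes looks roughly like this (FILE stands in for /home/vagrant/spdk_repo/spdk/test/ftl/file; tcp_dd is the common.sh@198-199 wrapper that runs spdk_dd with the saved --json config):

  seek=0; skip=0; sums=()
  for (( i = 0; i < iterations; i++ )); do    # iterations=2, bs=1 MiB, qd=2
      echo "Fill FTL, iteration $(( i + 1 ))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
      seek=$(( seek + 1024 ))
      echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
      tcp_dd --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$(( skip + 1024 ))
      sums[i]=$(md5sum "$FILE" | cut -f1 -d' ')   # 3ad1aa1f3c896ec6f12579b5b25b434b for pass 1
  done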
00:22:49.121 [2024-09-29 21:54:07.577089] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77919 ] 00:22:49.121 [2024-09-29 21:54:07.719927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.121 [2024-09-29 21:54:07.873040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.927  Copying: 270/1024 [MB] (270 MBps) Copying: 535/1024 [MB] (265 MBps) Copying: 796/1024 [MB] (261 MBps) Copying: 1024/1024 [MB] (average 264 MBps) 00:22:53.927 00:22:53.927 21:54:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:22:53.927 21:54:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:22:53.927 Calculate MD5 checksum, iteration 2 00:22:53.927 21:54:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:53.927 21:54:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:53.927 21:54:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:53.927 21:54:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:53.927 21:54:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:53.927 21:54:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:53.927 [2024-09-29 21:54:12.768318] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
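Once the read-back below completes, the test holds one MD5 per 1024 MiB window written before shutdown, which is what makes these sums worth recording in an upgrade test. A hypothetical post-restart spot-check (not shown in this part of the log) could replay the same read and compare:

  # Hypothetical verification for window i after the FTL device comes back up
  tcp_dd --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=$(( i * 1024 ))
  [[ $(md5sum "$FILE" | cut -f1 -d' ') == "${sums[i]}" ]] || echo "MD5 mismatch in window $i"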
00:22:53.927 [2024-09-29 21:54:12.768629] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77972 ] 00:22:54.185 [2024-09-29 21:54:12.912311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.186 [2024-09-29 21:54:13.067330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.059  Copying: 637/1024 [MB] (637 MBps) Copying: 1024/1024 [MB] (average 640 MBps) 00:22:57.059 00:22:57.059 21:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:22:57.059 21:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:59.590 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:59.590 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=461e3b04bec9f4e14d27e1f103791583 00:22:59.590 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:59.590 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:59.590 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:59.590 [2024-09-29 21:54:18.153309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.590 [2024-09-29 21:54:18.153553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:59.590 [2024-09-29 21:54:18.153574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:22:59.590 [2024-09-29 21:54:18.153587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.590 [2024-09-29 21:54:18.153619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.590 [2024-09-29 21:54:18.153627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:59.590 [2024-09-29 21:54:18.153634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:59.590 [2024-09-29 21:54:18.153642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.590 [2024-09-29 21:54:18.153657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.590 [2024-09-29 21:54:18.153664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:59.590 [2024-09-29 21:54:18.153672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:59.590 [2024-09-29 21:54:18.153678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.590 [2024-09-29 21:54:18.153737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.420 ms, result 0 00:22:59.590 true 00:22:59.590 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:59.590 { 00:22:59.590 "name": "ftl", 00:22:59.590 "properties": [ 00:22:59.590 { 00:22:59.590 "name": "superblock_version", 00:22:59.590 "value": 5, 00:22:59.590 "read-only": true 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "name": "base_device", 00:22:59.590 "bands": [ 00:22:59.590 { 00:22:59.590 "id": 0, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 1, 
00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 2, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 3, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 4, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 5, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 6, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 7, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 8, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 9, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 10, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 11, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 12, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 13, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 14, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 15, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 16, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 17, 00:22:59.590 "state": "FREE", 00:22:59.590 "validity": 0.0 00:22:59.590 } 00:22:59.590 ], 00:22:59.590 "read-only": true 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "name": "cache_device", 00:22:59.590 "type": "bdev", 00:22:59.590 "chunks": [ 00:22:59.590 { 00:22:59.590 "id": 0, 00:22:59.590 "state": "INACTIVE", 00:22:59.590 "utilization": 0.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 1, 00:22:59.590 "state": "CLOSED", 00:22:59.590 "utilization": 1.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 2, 00:22:59.590 "state": "CLOSED", 00:22:59.590 "utilization": 1.0 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 3, 00:22:59.590 "state": "OPEN", 00:22:59.590 "utilization": 0.001953125 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "id": 4, 00:22:59.590 "state": "OPEN", 00:22:59.590 "utilization": 0.0 00:22:59.590 } 00:22:59.590 ], 00:22:59.590 "read-only": true 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "name": "verbose_mode", 00:22:59.590 "value": true, 00:22:59.590 "unit": "", 00:22:59.590 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:59.590 }, 00:22:59.590 { 00:22:59.590 "name": "prep_upgrade_on_shutdown", 00:22:59.590 "value": false, 00:22:59.590 "unit": "", 00:22:59.590 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:59.590 } 00:22:59.590 ] 00:22:59.590 } 00:22:59.590 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:22:59.590 [2024-09-29 21:54:18.532762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.590 [2024-09-29 21:54:18.532825] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:59.590 [2024-09-29 21:54:18.532838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:22:59.590 [2024-09-29 21:54:18.532845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.590 [2024-09-29 21:54:18.532866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.590 [2024-09-29 21:54:18.532873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:59.590 [2024-09-29 21:54:18.532880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:59.590 [2024-09-29 21:54:18.532887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.590 [2024-09-29 21:54:18.532903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.590 [2024-09-29 21:54:18.532910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:59.590 [2024-09-29 21:54:18.532916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:59.590 [2024-09-29 21:54:18.532922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.591 [2024-09-29 21:54:18.532973] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.209 ms, result 0 00:22:59.591 true 00:22:59.591 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:22:59.591 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:59.591 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:59.849 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:22:59.849 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:22:59.849 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:00.108 [2024-09-29 21:54:18.961086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:00.108 [2024-09-29 21:54:18.961147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:00.108 [2024-09-29 21:54:18.961158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:00.108 [2024-09-29 21:54:18.961165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:00.108 [2024-09-29 21:54:18.961184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:00.108 [2024-09-29 21:54:18.961192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:00.108 [2024-09-29 21:54:18.961199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:00.108 [2024-09-29 21:54:18.961206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:00.108 [2024-09-29 21:54:18.961221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:00.108 [2024-09-29 21:54:18.961227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:00.108 [2024-09-29 21:54:18.961233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:00.108 [2024-09-29 21:54:18.961239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:00.108 [2024-09-29 21:54:18.961289] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.197 ms, result 0 00:23:00.108 true 00:23:00.108 21:54:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:00.367 { 00:23:00.367 "name": "ftl", 00:23:00.367 "properties": [ 00:23:00.367 { 00:23:00.367 "name": "superblock_version", 00:23:00.367 "value": 5, 00:23:00.367 "read-only": true 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "name": "base_device", 00:23:00.367 "bands": [ 00:23:00.367 { 00:23:00.367 "id": 0, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 1, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 2, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 3, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 4, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 5, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 6, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 7, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 8, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 9, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 10, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 11, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 12, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 13, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 14, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 15, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 16, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 17, 00:23:00.367 "state": "FREE", 00:23:00.367 "validity": 0.0 00:23:00.367 } 00:23:00.367 ], 00:23:00.367 "read-only": true 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "name": "cache_device", 00:23:00.367 "type": "bdev", 00:23:00.367 "chunks": [ 00:23:00.367 { 00:23:00.367 "id": 0, 00:23:00.367 "state": "INACTIVE", 00:23:00.367 "utilization": 0.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 1, 00:23:00.367 "state": "CLOSED", 00:23:00.367 "utilization": 1.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 2, 00:23:00.367 "state": "CLOSED", 00:23:00.367 "utilization": 1.0 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 3, 00:23:00.367 "state": "OPEN", 00:23:00.367 "utilization": 0.001953125 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "id": 4, 00:23:00.367 "state": "OPEN", 00:23:00.367 "utilization": 0.0 00:23:00.367 } 00:23:00.367 ], 00:23:00.367 "read-only": true 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "name": "verbose_mode", 00:23:00.367 "value": true, 00:23:00.367 "unit": "", 00:23:00.367 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:23:00.367 }, 00:23:00.367 { 00:23:00.367 "name": "prep_upgrade_on_shutdown", 00:23:00.367 "value": true, 00:23:00.367 "unit": "", 00:23:00.367 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:00.367 } 00:23:00.367 ] 00:23:00.367 } 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 77658 ]] 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 77658 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77658 ']' 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77658 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77658 00:23:00.367 killing process with pid 77658 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77658' 00:23:00.367 21:54:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77658 00:23:00.368 21:54:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77658 00:23:00.932 [2024-09-29 21:54:19.733236] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:23:00.932 [2024-09-29 21:54:19.744776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:00.932 [2024-09-29 21:54:19.744822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:23:00.932 [2024-09-29 21:54:19.744834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:00.932 [2024-09-29 21:54:19.744841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:00.932 [2024-09-29 21:54:19.744860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:23:00.932 [2024-09-29 21:54:19.747065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:00.932 [2024-09-29 21:54:19.747098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:23:00.932 [2024-09-29 21:54:19.747107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.194 ms 00:23:00.932 [2024-09-29 21:54:19.747115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.052720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.052802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:23:09.049 [2024-09-29 21:54:27.052816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7305.552 ms 00:23:09.049 [2024-09-29 21:54:27.052824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.054160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.054188] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:23:09.049 [2024-09-29 21:54:27.054196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.322 ms 00:23:09.049 [2024-09-29 21:54:27.054203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.055082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.055104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:23:09.049 [2024-09-29 21:54:27.055112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:23:09.049 [2024-09-29 21:54:27.055119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.062957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.062989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:23:09.049 [2024-09-29 21:54:27.062997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.808 ms 00:23:09.049 [2024-09-29 21:54:27.063004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.068258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.068287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:23:09.049 [2024-09-29 21:54:27.068296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.227 ms 00:23:09.049 [2024-09-29 21:54:27.068308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.068356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.068364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:23:09.049 [2024-09-29 21:54:27.068371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:23:09.049 [2024-09-29 21:54:27.068378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.075559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.075738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:23:09.049 [2024-09-29 21:54:27.075752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.156 ms 00:23:09.049 [2024-09-29 21:54:27.075757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.083186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.083308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:23:09.049 [2024-09-29 21:54:27.083320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.402 ms 00:23:09.049 [2024-09-29 21:54:27.083326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.090782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.090874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:23:09.049 [2024-09-29 21:54:27.090954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.430 ms 00:23:09.049 [2024-09-29 21:54:27.090972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.098049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 
[2024-09-29 21:54:27.098143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:23:09.049 [2024-09-29 21:54:27.098191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.002 ms 00:23:09.049 [2024-09-29 21:54:27.098208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.098241] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:23:09.049 [2024-09-29 21:54:27.098679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:09.049 [2024-09-29 21:54:27.098993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:23:09.049 [2024-09-29 21:54:27.099304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:23:09.049 [2024-09-29 21:54:27.099568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.099784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.099983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.100241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.100475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.100580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.100761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.100926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.101026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.101120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.101276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.101469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.101667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.101834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.101996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:09.049 [2024-09-29 21:54:27.102151] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:23:09.049 [2024-09-29 21:54:27.102179] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a23b268d-30e4-4780-9e3e-707f9a289069 00:23:09.049 [2024-09-29 21:54:27.102202] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:23:09.049 [2024-09-29 21:54:27.102232] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:23:09.049 [2024-09-29 21:54:27.102251] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:23:09.049 [2024-09-29 21:54:27.102302] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:23:09.049 [2024-09-29 21:54:27.102322] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:23:09.049 [2024-09-29 21:54:27.102344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:23:09.049 [2024-09-29 21:54:27.102364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:23:09.049 [2024-09-29 21:54:27.102382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:23:09.049 [2024-09-29 21:54:27.102421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:23:09.049 [2024-09-29 21:54:27.102447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.102470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:23:09.049 [2024-09-29 21:54:27.102493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.205 ms 00:23:09.049 [2024-09-29 21:54:27.102513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.120638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.120669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:23:09.049 [2024-09-29 21:54:27.120682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.055 ms 00:23:09.049 [2024-09-29 21:54:27.120690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.049 [2024-09-29 21:54:27.121084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:09.049 [2024-09-29 21:54:27.121102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:23:09.050 [2024-09-29 21:54:27.121112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.337 ms 00:23:09.050 [2024-09-29 21:54:27.121119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.159661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.159872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:09.050 [2024-09-29 21:54:27.159888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.159898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.159946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.159955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:09.050 [2024-09-29 21:54:27.159964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.159972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.160068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.160080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:09.050 [2024-09-29 21:54:27.160088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.160096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.160114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 
21:54:27.160122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:09.050 [2024-09-29 21:54:27.160130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.160137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.238846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.239080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:09.050 [2024-09-29 21:54:27.239099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.239108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.303867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.303920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:09.050 [2024-09-29 21:54:27.303933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.303942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.304049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.304065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:09.050 [2024-09-29 21:54:27.304074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.304082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.304125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.304134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:09.050 [2024-09-29 21:54:27.304142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.304150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.304246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.304259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:09.050 [2024-09-29 21:54:27.304268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.304276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.304305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.304315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:23:09.050 [2024-09-29 21:54:27.304323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.304331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.304372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.304381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:09.050 [2024-09-29 21:54:27.304410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.304418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.304468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:23:09.050 [2024-09-29 21:54:27.304479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:09.050 [2024-09-29 21:54:27.304487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:09.050 [2024-09-29 21:54:27.304495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:09.050 [2024-09-29 21:54:27.304620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7559.787 ms, result 0 00:23:15.610 21:54:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:15.610 21:54:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78162 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78162 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78162 ']' 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.611 21:54:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:15.611 [2024-09-29 21:54:34.004655] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
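The clean shutdown above ('FTL shutdown', result 0) is immediately followed by a restart: tcp_target_setup relaunches spdk_tgt pinned to core 0 against the FTL target configuration saved in tgt.json, then blocks until the RPC socket answers. A minimal bash sketch of what the ftl/common.sh trace walks through (the SPDK_DIR variable and the backgrounding details are assumptions for illustration, not the verbatim source):

    tcp_target_setup() {
        local base_bdev= cache_bdev=
        if [[ -f "$SPDK_DIR/test/ftl/config/tgt.json" ]]; then
            # relaunch the target from the config persisted by the previous run
            "$SPDK_DIR/build/bin/spdk_tgt" '--cpumask=[0]' \
                --config="$SPDK_DIR/test/ftl/config/tgt.json" &
            spdk_tgt_pid=$!
            export spdk_tgt_pid
            waitforlisten "$spdk_tgt_pid"  # returns once /var/tmp/spdk.sock accepts RPCs
        fi
    }

Because the previous instance persisted its metadata ('Set FTL clean state' above), the 'FTL startup' sequence that follows can restore state from disk, which is what the 'Restore NV cache metadata' through 'Restore L2P' steps below correspond to.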
00:23:15.611 [2024-09-29 21:54:34.004761] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78162 ] 00:23:15.611 [2024-09-29 21:54:34.151195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.611 [2024-09-29 21:54:34.335844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.179 [2024-09-29 21:54:35.121327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:16.179 [2024-09-29 21:54:35.121428] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:16.439 [2024-09-29 21:54:35.272610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.439 [2024-09-29 21:54:35.272686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:16.439 [2024-09-29 21:54:35.272704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:16.439 [2024-09-29 21:54:35.272714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.439 [2024-09-29 21:54:35.272778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.439 [2024-09-29 21:54:35.272790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:16.439 [2024-09-29 21:54:35.272800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:23:16.439 [2024-09-29 21:54:35.272808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.439 [2024-09-29 21:54:35.272841] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:16.440 [2024-09-29 21:54:35.273585] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:16.440 [2024-09-29 21:54:35.273609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.440 [2024-09-29 21:54:35.273618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:16.440 [2024-09-29 21:54:35.273628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.781 ms 00:23:16.440 [2024-09-29 21:54:35.273642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.440 [2024-09-29 21:54:35.275997] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:16.440 [2024-09-29 21:54:35.291407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.440 [2024-09-29 21:54:35.291457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:16.440 [2024-09-29 21:54:35.291472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.412 ms 00:23:16.440 [2024-09-29 21:54:35.291482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.440 [2024-09-29 21:54:35.291569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.440 [2024-09-29 21:54:35.291582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:23:16.440 [2024-09-29 21:54:35.291591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:23:16.440 [2024-09-29 21:54:35.291600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.440 [2024-09-29 21:54:35.303450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.440 [2024-09-29 
21:54:35.303492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:16.440 [2024-09-29 21:54:35.303504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.753 ms 00:23:16.440 [2024-09-29 21:54:35.303513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.440 [2024-09-29 21:54:35.303591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.440 [2024-09-29 21:54:35.303603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:16.440 [2024-09-29 21:54:35.303617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:23:16.440 [2024-09-29 21:54:35.303626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.440 [2024-09-29 21:54:35.303696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.440 [2024-09-29 21:54:35.303707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:16.440 [2024-09-29 21:54:35.303716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:23:16.440 [2024-09-29 21:54:35.303725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.440 [2024-09-29 21:54:35.303752] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:16.440 [2024-09-29 21:54:35.308365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.440 [2024-09-29 21:54:35.308414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:16.440 [2024-09-29 21:54:35.308426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.619 ms 00:23:16.440 [2024-09-29 21:54:35.308435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.440 [2024-09-29 21:54:35.308467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.440 [2024-09-29 21:54:35.308477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:16.440 [2024-09-29 21:54:35.308490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:16.440 [2024-09-29 21:54:35.308499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.440 [2024-09-29 21:54:35.308542] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:23:16.440 [2024-09-29 21:54:35.308573] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:23:16.440 [2024-09-29 21:54:35.308615] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:16.440 [2024-09-29 21:54:35.308634] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:23:16.440 [2024-09-29 21:54:35.308755] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:16.440 [2024-09-29 21:54:35.308773] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:16.440 [2024-09-29 21:54:35.308785] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:16.440 [2024-09-29 21:54:35.308797] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:16.440 [2024-09-29 21:54:35.308808] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:23:16.440 [2024-09-29 21:54:35.308817] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:16.440 [2024-09-29 21:54:35.308826] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:16.440 [2024-09-29 21:54:35.308834] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:16.440 [2024-09-29 21:54:35.308843] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:16.440 [2024-09-29 21:54:35.308852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.440 [2024-09-29 21:54:35.308861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:16.440 [2024-09-29 21:54:35.308872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.316 ms 00:23:16.440 [2024-09-29 21:54:35.308883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.440 [2024-09-29 21:54:35.308968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.440 [2024-09-29 21:54:35.308979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:16.440 [2024-09-29 21:54:35.308987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:23:16.440 [2024-09-29 21:54:35.308996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.440 [2024-09-29 21:54:35.309104] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:16.440 [2024-09-29 21:54:35.309118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:16.440 [2024-09-29 21:54:35.309127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:16.440 [2024-09-29 21:54:35.309135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:16.440 [2024-09-29 21:54:35.309156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:16.440 [2024-09-29 21:54:35.309173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:16.440 [2024-09-29 21:54:35.309181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:16.440 [2024-09-29 21:54:35.309187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:16.440 [2024-09-29 21:54:35.309201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:16.440 [2024-09-29 21:54:35.309208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:16.440 [2024-09-29 21:54:35.309225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:16.440 [2024-09-29 21:54:35.309234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:16.440 [2024-09-29 21:54:35.309249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:16.440 [2024-09-29 21:54:35.309256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309263] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:16.440 [2024-09-29 21:54:35.309271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:16.440 [2024-09-29 21:54:35.309278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:16.440 [2024-09-29 21:54:35.309285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:16.440 [2024-09-29 21:54:35.309292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:16.440 [2024-09-29 21:54:35.309309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:16.440 [2024-09-29 21:54:35.309316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:16.440 [2024-09-29 21:54:35.309323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:16.440 [2024-09-29 21:54:35.309330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:16.440 [2024-09-29 21:54:35.309337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:16.440 [2024-09-29 21:54:35.309346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:16.440 [2024-09-29 21:54:35.309353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:16.440 [2024-09-29 21:54:35.309360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:16.440 [2024-09-29 21:54:35.309367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:16.440 [2024-09-29 21:54:35.309374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:16.440 [2024-09-29 21:54:35.309406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:16.440 [2024-09-29 21:54:35.309414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:16.440 [2024-09-29 21:54:35.309429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:16.440 [2024-09-29 21:54:35.309451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:16.440 [2024-09-29 21:54:35.309458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309466] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:16.440 [2024-09-29 21:54:35.309475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:16.440 [2024-09-29 21:54:35.309483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:16.440 [2024-09-29 21:54:35.309490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:16.440 [2024-09-29 21:54:35.309502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:16.441 [2024-09-29 21:54:35.309510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:16.441 [2024-09-29 21:54:35.309518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:16.441 [2024-09-29 21:54:35.309527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:16.441 [2024-09-29 21:54:35.309534] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:16.441 [2024-09-29 21:54:35.309541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:16.441 [2024-09-29 21:54:35.309549] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:16.441 [2024-09-29 21:54:35.309560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:16.441 [2024-09-29 21:54:35.309577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:16.441 [2024-09-29 21:54:35.309600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:16.441 [2024-09-29 21:54:35.309608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:16.441 [2024-09-29 21:54:35.309615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:16.441 [2024-09-29 21:54:35.309622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:16.441 [2024-09-29 21:54:35.309676] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:16.441 [2024-09-29 21:54:35.309686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:16.441 [2024-09-29 21:54:35.309702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:16.441 [2024-09-29 21:54:35.309711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:16.441 [2024-09-29 21:54:35.309719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:16.441 [2024-09-29 21:54:35.309728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:16.441 [2024-09-29 21:54:35.309736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:16.441 [2024-09-29 21:54:35.309747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.697 ms 00:23:16.441 [2024-09-29 21:54:35.309755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:16.441 [2024-09-29 21:54:35.309806] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:23:16.441 [2024-09-29 21:54:35.309818] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:23:20.644 [2024-09-29 21:54:39.043352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.043413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:23:20.644 [2024-09-29 21:54:39.043426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3733.533 ms 00:23:20.644 [2024-09-29 21:54:39.043439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.067141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.067187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:20.644 [2024-09-29 21:54:39.067199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.525 ms 00:23:20.644 [2024-09-29 21:54:39.067206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.067281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.067289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:20.644 [2024-09-29 21:54:39.067298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:23:20.644 [2024-09-29 21:54:39.067305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.108407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.108443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:20.644 [2024-09-29 21:54:39.108454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.070 ms 00:23:20.644 [2024-09-29 21:54:39.108461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.108492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.108499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:20.644 [2024-09-29 21:54:39.108506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:20.644 [2024-09-29 21:54:39.108512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.108955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.108971] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:20.644 [2024-09-29 21:54:39.108978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:23:20.644 [2024-09-29 21:54:39.108984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.109019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.109028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:20.644 [2024-09-29 21:54:39.109035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:23:20.644 [2024-09-29 21:54:39.109040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.121498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.121523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:20.644 [2024-09-29 21:54:39.121531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.439 ms 00:23:20.644 [2024-09-29 21:54:39.121537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.132273] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:20.644 [2024-09-29 21:54:39.132405] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:20.644 [2024-09-29 21:54:39.132420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.132427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:23:20.644 [2024-09-29 21:54:39.132434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.792 ms 00:23:20.644 [2024-09-29 21:54:39.132439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.143363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.143398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:23:20.644 [2024-09-29 21:54:39.143408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.846 ms 00:23:20.644 [2024-09-29 21:54:39.143414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.152279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.152302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:23:20.644 [2024-09-29 21:54:39.152310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.829 ms 00:23:20.644 [2024-09-29 21:54:39.152315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.161291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.161315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:23:20.644 [2024-09-29 21:54:39.161323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.948 ms 00:23:20.644 [2024-09-29 21:54:39.161328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.161798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.161816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:20.644 [2024-09-29 
21:54:39.161823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.399 ms 00:23:20.644 [2024-09-29 21:54:39.161830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.210384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.210425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:20.644 [2024-09-29 21:54:39.210436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.539 ms 00:23:20.644 [2024-09-29 21:54:39.210443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.218885] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:20.644 [2024-09-29 21:54:39.219548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.219570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:20.644 [2024-09-29 21:54:39.219582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.067 ms 00:23:20.644 [2024-09-29 21:54:39.219588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.219661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.219669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:23:20.644 [2024-09-29 21:54:39.219676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:23:20.644 [2024-09-29 21:54:39.219682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.644 [2024-09-29 21:54:39.219721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.644 [2024-09-29 21:54:39.219729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:20.645 [2024-09-29 21:54:39.219735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:23:20.645 [2024-09-29 21:54:39.219744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.645 [2024-09-29 21:54:39.219763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.645 [2024-09-29 21:54:39.219770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:20.645 [2024-09-29 21:54:39.219777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:20.645 [2024-09-29 21:54:39.219783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.645 [2024-09-29 21:54:39.219813] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:23:20.645 [2024-09-29 21:54:39.219821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.645 [2024-09-29 21:54:39.219827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:20.645 [2024-09-29 21:54:39.219834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:23:20.645 [2024-09-29 21:54:39.219840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:20.645 [2024-09-29 21:54:39.237603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:20.645 [2024-09-29 21:54:39.237712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:23:20.645 [2024-09-29 21:54:39.237727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.746 ms 00:23:20.645 [2024-09-29 21:54:39.237735] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:23:20.645 [2024-09-29 21:54:39.237793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:23:20.645 [2024-09-29 21:54:39.237801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:23:20.645 [2024-09-29 21:54:39.237808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms
00:23:20.645 [2024-09-29 21:54:39.237816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:23:20.645 [2024-09-29 21:54:39.238747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3965.758 ms, result 0
00:23:20.645 [2024-09-29 21:54:39.253993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:20.645 [2024-09-29 21:54:39.269979] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:23:20.645 [2024-09-29 21:54:39.278115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:23:21.256 21:54:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:21.256 21:54:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0
00:23:21.256 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:23:21.256 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:23:21.256 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:23:21.256 [2024-09-29 21:54:40.190701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:23:21.256 [2024-09-29 21:54:40.190747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:23:21.256 [2024-09-29 21:54:40.190760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:23:21.256 [2024-09-29 21:54:40.190768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:23:21.256 [2024-09-29 21:54:40.190786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:23:21.256 [2024-09-29 21:54:40.190794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:23:21.256 [2024-09-29 21:54:40.190801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:23:21.256 [2024-09-29 21:54:40.190807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:23:21.256 [2024-09-29 21:54:40.190823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:23:21.256 [2024-09-29 21:54:40.190833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:23:21.256 [2024-09-29 21:54:40.190840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:23:21.256 [2024-09-29 21:54:40.190846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:23:21.256 [2024-09-29 21:54:40.190896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.192 ms, result 0
00:23:21.256 true
00:23:21.537 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
{
  "name": "ftl",
  "properties": [
    {
      "name": "superblock_version",
      "value": 5,
      "read-only": true
    },
    {
      "name": "base_device",
      "bands": [
        { "id": 0, "state": "CLOSED", "validity": 1.0 },
        { "id": 1, "state": "CLOSED", "validity": 1.0 },
        { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
        { "id": 3, "state": "FREE", "validity": 0.0 },
        { "id": 4, "state": "FREE", "validity": 0.0 },
        { "id": 5, "state": "FREE", "validity": 0.0 },
        { "id": 6, "state": "FREE", "validity": 0.0 },
        { "id": 7, "state": "FREE", "validity": 0.0 },
        { "id": 8, "state": "FREE", "validity": 0.0 },
        { "id": 9, "state": "FREE", "validity": 0.0 },
        { "id": 10, "state": "FREE", "validity": 0.0 },
        { "id": 11, "state": "FREE", "validity": 0.0 },
        { "id": 12, "state": "FREE", "validity": 0.0 },
        { "id": 13, "state": "FREE", "validity": 0.0 },
        { "id": 14, "state": "FREE", "validity": 0.0 },
        { "id": 15, "state": "FREE", "validity": 0.0 },
        { "id": 16, "state": "FREE", "validity": 0.0 },
        { "id": 17, "state": "FREE", "validity": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
        { "id": 1, "state": "OPEN", "utilization": 0.0 },
        { "id": 2, "state": "OPEN", "utilization": 0.0 },
        { "id": 3, "state": "FREE", "utilization": 0.0 },
        { "id": 4, "state": "FREE", "utilization": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "verbose_mode",
      "value": true,
      "unit": "",
      "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
    },
    {
      "name": "prep_upgrade_on_shutdown",
      "value": false,
      "unit": "",
      "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
    }
  ]
}
00:23:21.538 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
21:54:40
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:23:21.538 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:21.797 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:23:21.797 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:23:21.797 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:23:21.797 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:23:21.797 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:22.055 Validate MD5 checksum, iteration 1 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:22.055 21:54:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:22.055 [2024-09-29 21:54:40.910606] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
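Before any data is read back, the upgrade_shutdown.sh@82 and @89 traces above assert that the restarted instance came up clean: one jq pipeline counts cache chunks with non-zero utilization, the other counts bands still in the OPENED state, and both counts must be 0. In sketch form (ftl_get_properties wraps the rpc.py bdev_ftl_get_properties call, as the @59 trace shows; the exit handling here is illustrative):

    used=$(ftl_get_properties | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && exit 1    # a utilized chunk would mean unflushed cache state

    opened=$(ftl_get_properties | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
    [[ $opened -ne 0 ]] && exit 1  # an OPENED band would mean an interrupted write

With both counters at 0, the test proceeds to the MD5 passes, reading the device contents back 1 GiB at a time.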
00:23:22.056 [2024-09-29 21:54:40.910869] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78255 ] 00:23:22.314 [2024-09-29 21:54:41.060008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.314 [2024-09-29 21:54:41.271824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.730  Copying: 546/1024 [MB] (546 MBps) Copying: 1024/1024 [MB] (average 577 MBps) 00:23:25.730 00:23:25.730 21:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:23:25.730 21:54:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3ad1aa1f3c896ec6f12579b5b25b434b 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3ad1aa1f3c896ec6f12579b5b25b434b != \3\a\d\1\a\a\1\f\3\c\8\9\6\e\c\6\f\1\2\5\7\9\b\5\b\2\5\b\4\3\4\b ]] 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:28.263 Validate MD5 checksum, iteration 2 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:28.263 21:54:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:28.263 [2024-09-29 21:54:46.795233] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
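The skip/md5sum/compare sequence above is the core of test_validate_checksum: each 1 GiB slice read back from ftln1 over NVMe/TCP must reproduce the MD5 recorded for that slice when the data was originally written. A condensed sketch of the loop (testfile, iterations, and the expected array are assumed names standing in for the script's bookkeeping):

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # read 1024 blocks of 1 MiB from the FTL bdev, two I/Os in flight
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        # iteration 1 above yielded 3ad1aa1f3c896ec6f12579b5b25b434b, for example
        [[ $sum == "${expected[i]}" ]] || exit 1
    done

tcp_dd itself, per the common.sh@199 trace, is spdk_dd pinned to core 1 and pointed at the initiator JSON, so the reads go over the NVMe/TCP connection to port 4420 rather than through the kernel.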
00:23:28.263 [2024-09-29 21:54:46.795538] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78324 ] 00:23:28.263 [2024-09-29 21:54:46.947664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.263 [2024-09-29 21:54:47.160858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.677  Copying: 563/1024 [MB] (563 MBps) Copying: 1024/1024 [MB] (average 555 MBps) 00:23:31.677 00:23:31.677 21:54:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:31.677 21:54:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=461e3b04bec9f4e14d27e1f103791583 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 461e3b04bec9f4e14d27e1f103791583 != \4\6\1\e\3\b\0\4\b\e\c\9\f\4\e\1\4\d\2\7\e\1\f\1\0\3\7\9\1\5\8\3 ]] 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 78162 ]] 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 78162 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78391 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78391 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78391 ']' 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
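This restart differs from the earlier one in how the previous target died: tcp_target_shutdown_dirty, traced at common.sh@137-139 above, SIGKILLs pid 78162 instead of asking it to stop, so none of the 'FTL shutdown' actions run and the superblock is left dirty. Roughly (a sketch following the trace, not the verbatim helper):

    tcp_target_shutdown_dirty() {
        if [[ -n $spdk_tgt_pid ]]; then
            kill -9 "$spdk_tgt_pid"  # no SIGTERM, so the FTL shutdown chain never runs
            unset spdk_tgt_pid
        fi
    }
    tcp_target_shutdown_dirty
    tcp_target_setup  # same relaunch as before, now against a dirty superblock

The interesting part is the startup that follows: FTL has to detect the dirty state and recover consistent metadata, after which the checksum validation can be repeated against the recovered device.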
00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:34.206 21:54:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:34.206 [2024-09-29 21:54:52.660064] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:34.207 [2024-09-29 21:54:52.660184] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78391 ] 00:23:34.207 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 78162 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:23:34.207 [2024-09-29 21:54:52.807241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.207 [2024-09-29 21:54:52.982972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.772 [2024-09-29 21:54:53.609146] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:34.772 [2024-09-29 21:54:53.609200] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:34.772 [2024-09-29 21:54:53.753356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.772 [2024-09-29 21:54:53.753418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:34.772 [2024-09-29 21:54:53.753433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:34.772 [2024-09-29 21:54:53.753441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.772 [2024-09-29 21:54:53.753489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.772 [2024-09-29 21:54:53.753499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:34.772 [2024-09-29 21:54:53.753508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:23:34.772 [2024-09-29 21:54:53.753515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.772 [2024-09-29 21:54:53.753565] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:34.772 [2024-09-29 21:54:53.754226] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:34.772 [2024-09-29 21:54:53.754249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:34.772 [2024-09-29 21:54:53.754258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:34.772 [2024-09-29 21:54:53.754266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.717 ms 00:23:34.772 [2024-09-29 21:54:53.754294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.772 [2024-09-29 21:54:53.754605] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:35.031 [2024-09-29 21:54:53.770894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.031 [2024-09-29 21:54:53.770929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:35.031 [2024-09-29 21:54:53.770944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.289 ms 00:23:35.031 [2024-09-29 21:54:53.770951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.031 [2024-09-29 21:54:53.780109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:23:35.031 [2024-09-29 21:54:53.780138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:23:35.031 [2024-09-29 21:54:53.780148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:23:35.031 [2024-09-29 21:54:53.780156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.031 [2024-09-29 21:54:53.780497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.031 [2024-09-29 21:54:53.780517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:35.031 [2024-09-29 21:54:53.780527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:23:35.031 [2024-09-29 21:54:53.780535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.031 [2024-09-29 21:54:53.780584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.031 [2024-09-29 21:54:53.780736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:35.031 [2024-09-29 21:54:53.780750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:23:35.031 [2024-09-29 21:54:53.780760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.031 [2024-09-29 21:54:53.780791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.031 [2024-09-29 21:54:53.780801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:35.031 [2024-09-29 21:54:53.780812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:23:35.031 [2024-09-29 21:54:53.780820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.031 [2024-09-29 21:54:53.780844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:35.031 [2024-09-29 21:54:53.783763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.031 [2024-09-29 21:54:53.783893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:35.031 [2024-09-29 21:54:53.783908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.924 ms 00:23:35.031 [2024-09-29 21:54:53.783916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.031 [2024-09-29 21:54:53.783943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.031 [2024-09-29 21:54:53.783952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:35.031 [2024-09-29 21:54:53.783960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:23:35.031 [2024-09-29 21:54:53.783967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.031 [2024-09-29 21:54:53.783988] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:23:35.031 [2024-09-29 21:54:53.784008] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:23:35.031 [2024-09-29 21:54:53.784045] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:35.031 [2024-09-29 21:54:53.784061] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:23:35.031 [2024-09-29 21:54:53.784164] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:35.031 [2024-09-29 21:54:53.784176] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:35.031 [2024-09-29 21:54:53.784188] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:35.031 [2024-09-29 21:54:53.784197] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:35.031 [2024-09-29 21:54:53.784206] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:35.031 [2024-09-29 21:54:53.784214] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:35.031 [2024-09-29 21:54:53.784225] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:35.031 [2024-09-29 21:54:53.784232] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:35.031 [2024-09-29 21:54:53.784239] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:35.031 [2024-09-29 21:54:53.784247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.031 [2024-09-29 21:54:53.784255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:35.031 [2024-09-29 21:54:53.784264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:23:35.031 [2024-09-29 21:54:53.784271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.031 [2024-09-29 21:54:53.784355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.031 [2024-09-29 21:54:53.784364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:35.031 [2024-09-29 21:54:53.784373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:23:35.031 [2024-09-29 21:54:53.784382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.031 [2024-09-29 21:54:53.784514] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:35.031 [2024-09-29 21:54:53.784525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:35.031 [2024-09-29 21:54:53.784535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:35.031 [2024-09-29 21:54:53.784543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:35.031 [2024-09-29 21:54:53.784558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:35.031 [2024-09-29 21:54:53.784571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:35.031 [2024-09-29 21:54:53.784579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:35.031 [2024-09-29 21:54:53.784586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:35.031 [2024-09-29 21:54:53.784600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:35.031 [2024-09-29 21:54:53.784607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:35.031 [2024-09-29 21:54:53.784620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:23:35.031 [2024-09-29 21:54:53.784628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:35.031 [2024-09-29 21:54:53.784642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:35.031 [2024-09-29 21:54:53.784648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:35.031 [2024-09-29 21:54:53.784662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:35.031 [2024-09-29 21:54:53.784669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:35.031 [2024-09-29 21:54:53.784682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:35.031 [2024-09-29 21:54:53.784688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:35.031 [2024-09-29 21:54:53.784694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:35.031 [2024-09-29 21:54:53.784701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:35.031 [2024-09-29 21:54:53.784707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:35.031 [2024-09-29 21:54:53.784713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:35.031 [2024-09-29 21:54:53.784720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:35.031 [2024-09-29 21:54:53.784727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:35.031 [2024-09-29 21:54:53.784733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:35.031 [2024-09-29 21:54:53.784740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:35.031 [2024-09-29 21:54:53.784746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:35.031 [2024-09-29 21:54:53.784752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:35.031 [2024-09-29 21:54:53.784765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:35.031 [2024-09-29 21:54:53.784772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:35.031 [2024-09-29 21:54:53.784785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:35.031 [2024-09-29 21:54:53.784805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:35.031 [2024-09-29 21:54:53.784811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784817] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:35.031 [2024-09-29 21:54:53.784825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:35.031 [2024-09-29 21:54:53.784832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:35.031 [2024-09-29 21:54:53.784842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:23:35.031 [2024-09-29 21:54:53.784853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:35.031 [2024-09-29 21:54:53.784859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:35.031 [2024-09-29 21:54:53.784867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:35.031 [2024-09-29 21:54:53.784874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:35.031 [2024-09-29 21:54:53.784881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:35.031 [2024-09-29 21:54:53.784888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:35.031 [2024-09-29 21:54:53.784896] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:35.031 [2024-09-29 21:54:53.784905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.031 [2024-09-29 21:54:53.784913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:35.031 [2024-09-29 21:54:53.784920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:35.032 [2024-09-29 21:54:53.784927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:35.032 [2024-09-29 21:54:53.784935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:35.032 [2024-09-29 21:54:53.784942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:35.032 [2024-09-29 21:54:53.784949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:35.032 [2024-09-29 21:54:53.784956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:35.032 [2024-09-29 21:54:53.784962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:35.032 [2024-09-29 21:54:53.784969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:35.032 [2024-09-29 21:54:53.784976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:35.032 [2024-09-29 21:54:53.784983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:35.032 [2024-09-29 21:54:53.784991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:35.032 [2024-09-29 21:54:53.784998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:35.032 [2024-09-29 21:54:53.785005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:35.032 [2024-09-29 21:54:53.785013] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:23:35.032 [2024-09-29 21:54:53.785021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.032 [2024-09-29 21:54:53.785029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:35.032 [2024-09-29 21:54:53.785037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:35.032 [2024-09-29 21:54:53.785045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:35.032 [2024-09-29 21:54:53.785052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:35.032 [2024-09-29 21:54:53.785060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.785071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:35.032 [2024-09-29 21:54:53.785079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.615 ms 00:23:35.032 [2024-09-29 21:54:53.785087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.810888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.811017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:35.032 [2024-09-29 21:54:53.811033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.752 ms 00:23:35.032 [2024-09-29 21:54:53.811042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.811082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.811090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:35.032 [2024-09-29 21:54:53.811103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:23:35.032 [2024-09-29 21:54:53.811110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.856037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.856177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:35.032 [2024-09-29 21:54:53.856196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.875 ms 00:23:35.032 [2024-09-29 21:54:53.856205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.856250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.856259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:35.032 [2024-09-29 21:54:53.856268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:35.032 [2024-09-29 21:54:53.856275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.856424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.856437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:35.032 [2024-09-29 21:54:53.856446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:23:35.032 [2024-09-29 21:54:53.856453] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.856504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.856513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:35.032 [2024-09-29 21:54:53.856521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:23:35.032 [2024-09-29 21:54:53.856528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.871084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.871201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:35.032 [2024-09-29 21:54:53.871215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.533 ms 00:23:35.032 [2024-09-29 21:54:53.871224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.871329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.871341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:23:35.032 [2024-09-29 21:54:53.871350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:35.032 [2024-09-29 21:54:53.871360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.888022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.888054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:23:35.032 [2024-09-29 21:54:53.888064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.642 ms 00:23:35.032 [2024-09-29 21:54:53.888076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.897413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.897527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:35.032 [2024-09-29 21:54:53.897542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:23:35.032 [2024-09-29 21:54:53.897550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.955287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.955337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:35.032 [2024-09-29 21:54:53.955350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.685 ms 00:23:35.032 [2024-09-29 21:54:53.955359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.955524] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:23:35.032 [2024-09-29 21:54:53.955636] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:23:35.032 [2024-09-29 21:54:53.955744] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:23:35.032 [2024-09-29 21:54:53.955850] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:23:35.032 [2024-09-29 21:54:53.955871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.955879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:23:35.032 [2024-09-29 
21:54:53.955893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.452 ms 00:23:35.032 [2024-09-29 21:54:53.955901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.955958] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:23:35.032 [2024-09-29 21:54:53.955969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.955977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:23:35.032 [2024-09-29 21:54:53.955986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:23:35.032 [2024-09-29 21:54:53.955994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.970707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.970741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:23:35.032 [2024-09-29 21:54:53.970752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.691 ms 00:23:35.032 [2024-09-29 21:54:53.970760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.979035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.979063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:23:35.032 [2024-09-29 21:54:53.979074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:23:35.032 [2024-09-29 21:54:53.979086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.032 [2024-09-29 21:54:53.979170] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:23:35.032 [2024-09-29 21:54:53.979328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.032 [2024-09-29 21:54:53.979340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:23:35.032 [2024-09-29 21:54:53.979349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:23:35.032 [2024-09-29 21:54:53.979357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.597 [2024-09-29 21:54:54.405109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.597 [2024-09-29 21:54:54.405353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:23:35.597 [2024-09-29 21:54:54.405379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 424.978 ms 00:23:35.597 [2024-09-29 21:54:54.405403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.597 [2024-09-29 21:54:54.409288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.597 [2024-09-29 21:54:54.409327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:23:35.597 [2024-09-29 21:54:54.409338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.783 ms 00:23:35.597 [2024-09-29 21:54:54.409347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.597 [2024-09-29 21:54:54.409965] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:23:35.597 [2024-09-29 21:54:54.410000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.597 [2024-09-29 21:54:54.410009] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:23:35.597 [2024-09-29 21:54:54.410018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.627 ms 00:23:35.597 [2024-09-29 21:54:54.410027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.597 [2024-09-29 21:54:54.410063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.597 [2024-09-29 21:54:54.410072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:23:35.597 [2024-09-29 21:54:54.410080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:35.597 [2024-09-29 21:54:54.410087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:35.597 [2024-09-29 21:54:54.410121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 430.951 ms, result 0 00:23:35.597 [2024-09-29 21:54:54.410160] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:23:35.597 [2024-09-29 21:54:54.410359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:35.597 [2024-09-29 21:54:54.410370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:23:35.597 [2024-09-29 21:54:54.410378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:23:35.597 [2024-09-29 21:54:54.410403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 21:54:54.864052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.864291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:23:36.163 [2024-09-29 21:54:54.864314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 452.699 ms 00:23:36.163 [2024-09-29 21:54:54.864322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 21:54:54.868467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.868502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:23:36.163 [2024-09-29 21:54:54.868513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.035 ms 00:23:36.163 [2024-09-29 21:54:54.868521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 21:54:54.868831] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:23:36.163 [2024-09-29 21:54:54.868870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.868879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:23:36.163 [2024-09-29 21:54:54.868888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:23:36.163 [2024-09-29 21:54:54.868896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 21:54:54.868925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.868934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:23:36.163 [2024-09-29 21:54:54.868942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:36.163 [2024-09-29 21:54:54.868949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 
21:54:54.868983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 458.818 ms, result 0 00:23:36.163 [2024-09-29 21:54:54.869025] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:36.163 [2024-09-29 21:54:54.869034] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:36.163 [2024-09-29 21:54:54.869044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.869053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:23:36.163 [2024-09-29 21:54:54.869066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 889.891 ms 00:23:36.163 [2024-09-29 21:54:54.869073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 21:54:54.869102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.869111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:23:36.163 [2024-09-29 21:54:54.869118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:36.163 [2024-09-29 21:54:54.869127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 21:54:54.880737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:36.163 [2024-09-29 21:54:54.880854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.880866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:36.163 [2024-09-29 21:54:54.880875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.712 ms 00:23:36.163 [2024-09-29 21:54:54.880883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 21:54:54.881591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.881614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:23:36.163 [2024-09-29 21:54:54.881624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.641 ms 00:23:36.163 [2024-09-29 21:54:54.881631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 21:54:54.883858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.883878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:23:36.163 [2024-09-29 21:54:54.883888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.210 ms 00:23:36.163 [2024-09-29 21:54:54.883897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 21:54:54.883938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.883947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:23:36.163 [2024-09-29 21:54:54.883955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:36.163 [2024-09-29 21:54:54.883963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.163 [2024-09-29 21:54:54.884070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.163 [2024-09-29 21:54:54.884080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:36.163 
[2024-09-29 21:54:54.884088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:23:36.164 [2024-09-29 21:54:54.884095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.164 [2024-09-29 21:54:54.884116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.164 [2024-09-29 21:54:54.884127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:36.164 [2024-09-29 21:54:54.884135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:23:36.164 [2024-09-29 21:54:54.884142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.164 [2024-09-29 21:54:54.884172] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:23:36.164 [2024-09-29 21:54:54.884181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.164 [2024-09-29 21:54:54.884190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:36.164 [2024-09-29 21:54:54.884198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:23:36.164 [2024-09-29 21:54:54.884206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.164 [2024-09-29 21:54:54.884259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:36.164 [2024-09-29 21:54:54.884273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:36.164 [2024-09-29 21:54:54.884282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:23:36.164 [2024-09-29 21:54:54.884290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:36.164 [2024-09-29 21:54:54.885572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1131.711 ms, result 0 00:23:36.164 [2024-09-29 21:54:54.901240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.164 [2024-09-29 21:54:54.917222] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:36.164 [2024-09-29 21:54:54.925484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:36.422 Validate MD5 checksum, iteration 1 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:36.422 21:54:55 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:36.422 21:54:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:36.422 [2024-09-29 21:54:55.255485] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:36.422 [2024-09-29 21:54:55.255725] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78424 ] 00:23:36.422 [2024-09-29 21:54:55.404968] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.680 [2024-09-29 21:54:55.580537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.090  Copying: 568/1024 [MB] (568 MBps) Copying: 1024/1024 [MB] (average 572 MBps) 00:23:40.090 00:23:40.090 21:54:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:23:40.090 21:54:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=3ad1aa1f3c896ec6f12579b5b25b434b 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 3ad1aa1f3c896ec6f12579b5b25b434b != \3\a\d\1\a\a\1\f\3\c\8\9\6\e\c\6\f\1\2\5\7\9\b\5\b\2\5\b\4\3\4\b ]] 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:42.049 Validate MD5 checksum, iteration 2 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:42.049 21:55:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:42.318 [2024-09-29 21:55:01.040982] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
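One detail worth noting in the comparison step above: inside [[ ... != ... ]] the right-hand side is a glob pattern, which is why the expected checksum appears in the trace with every character backslash-escaped (\3\a\d\1...) — the escaping forces a literal rather than pattern match. Quoting achieves the same effect, as in this equivalent sketch ($expected stands in for the recorded checksum):

    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    if [[ $sum != "$expected" ]]; then  # a quoted RHS of [[ ]] is compared literally
        echo "checksum mismatch: got $sum, want $expected" >&2
        return 1
    fi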
00:23:42.318 [2024-09-29 21:55:01.041095] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78491 ] 00:23:42.318 [2024-09-29 21:55:01.188396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.576 [2024-09-29 21:55:01.327807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.086  Copying: 714/1024 [MB] (714 MBps) Copying: 1024/1024 [MB] (average 700 MBps) 00:23:45.086 00:23:45.086 21:55:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:45.086 21:55:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=461e3b04bec9f4e14d27e1f103791583 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 461e3b04bec9f4e14d27e1f103791583 != \4\6\1\e\3\b\0\4\b\e\c\9\f\4\e\1\4\d\2\7\e\1\f\1\0\3\7\9\1\5\8\3 ]] 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78391 ]] 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78391 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78391 ']' 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78391 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78391 00:23:47.655 killing process with pid 78391 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78391' 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 78391 00:23:47.655 21:55:06 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@974 -- # wait 78391 00:23:47.917 [2024-09-29 21:55:06.719045] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:23:47.917 [2024-09-29 21:55:06.730663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.730696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:23:47.917 [2024-09-29 21:55:06.730706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:47.917 [2024-09-29 21:55:06.730715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.730732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:23:47.917 [2024-09-29 21:55:06.732785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.732808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:23:47.917 [2024-09-29 21:55:06.732816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.043 ms 00:23:47.917 [2024-09-29 21:55:06.732823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.732995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.733003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:23:47.917 [2024-09-29 21:55:06.733010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.156 ms 00:23:47.917 [2024-09-29 21:55:06.733016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.734637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.734666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:23:47.917 [2024-09-29 21:55:06.734674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.609 ms 00:23:47.917 [2024-09-29 21:55:06.734680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.735599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.735616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:23:47.917 [2024-09-29 21:55:06.735623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.895 ms 00:23:47.917 [2024-09-29 21:55:06.735629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.743103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.743128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:23:47.917 [2024-09-29 21:55:06.743136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.447 ms 00:23:47.917 [2024-09-29 21:55:06.743142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.747303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.747328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:23:47.917 [2024-09-29 21:55:06.747336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.136 ms 00:23:47.917 [2024-09-29 21:55:06.747342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.747416] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.747424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:23:47.917 [2024-09-29 21:55:06.747431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:23:47.917 [2024-09-29 21:55:06.747437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.754514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.754537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:23:47.917 [2024-09-29 21:55:06.754544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.065 ms 00:23:47.917 [2024-09-29 21:55:06.754549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.761592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.761699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:23:47.917 [2024-09-29 21:55:06.761710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.019 ms 00:23:47.917 [2024-09-29 21:55:06.761715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.768764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.768855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:23:47.917 [2024-09-29 21:55:06.768865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.026 ms 00:23:47.917 [2024-09-29 21:55:06.768870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.775635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.917 [2024-09-29 21:55:06.775728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:23:47.917 [2024-09-29 21:55:06.775738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.723 ms 00:23:47.917 [2024-09-29 21:55:06.775743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.917 [2024-09-29 21:55:06.775765] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:23:47.917 [2024-09-29 21:55:06.775775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:47.918 [2024-09-29 21:55:06.775783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:23:47.918 [2024-09-29 21:55:06.775789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:23:47.918 [2024-09-29 21:55:06.775795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 
261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:47.918 [2024-09-29 21:55:06.775882] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:23:47.918 [2024-09-29 21:55:06.775888] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a23b268d-30e4-4780-9e3e-707f9a289069 00:23:47.918 [2024-09-29 21:55:06.775893] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:23:47.918 [2024-09-29 21:55:06.775899] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:23:47.918 [2024-09-29 21:55:06.775904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:23:47.918 [2024-09-29 21:55:06.775913] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:23:47.918 [2024-09-29 21:55:06.775919] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:23:47.918 [2024-09-29 21:55:06.775925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:23:47.918 [2024-09-29 21:55:06.775930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:23:47.918 [2024-09-29 21:55:06.775935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:23:47.918 [2024-09-29 21:55:06.775940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:23:47.918 [2024-09-29 21:55:06.775945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.918 [2024-09-29 21:55:06.775952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:23:47.918 [2024-09-29 21:55:06.775959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:23:47.918 [2024-09-29 21:55:06.775965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.918 [2024-09-29 21:55:06.785650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:47.918 [2024-09-29 21:55:06.785674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:23:47.918 [2024-09-29 21:55:06.785683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.673 ms 00:23:47.918 [2024-09-29 21:55:06.785689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.918 [2024-09-29 21:55:06.785960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
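The statistics block closes out the clean shutdown: three bands carry data (two full at 261120/261120 blocks plus one holding 2048, which matches the reported 524288 total valid LBAs, since 2 x 261120 + 2048 = 524288), and WAF prints as inf because it divides device writes by user writes, and user writes are zero in this post-recovery session — only read-back traffic ran after the restart. For quick triage, band utilization can be totaled straight from a saved log; a small sketch (the log file path is an assumption):

    # Sum 'used / total' block counts from ftl_dev_dump_bands lines in a saved log.
    awk '/ftl_dev_dump_bands/ && /Band [0-9]+:/ {
             for (i = 1; i < NF; i++)
                 if ($i == "Band") { used += $(i + 2); total += $(i + 4); break }
         }
         END { printf "bands: %d used of %d total blocks\n", used, total }' ftl_shutdown.log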
00:23:47.918 [2024-09-29 21:55:06.785967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:23:47.918 [2024-09-29 21:55:06.785973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:23:47.918 [2024-09-29 21:55:06.785979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.918 [2024-09-29 21:55:06.815641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:47.918 [2024-09-29 21:55:06.815663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:47.918 [2024-09-29 21:55:06.815671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:47.918 [2024-09-29 21:55:06.815678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.918 [2024-09-29 21:55:06.815699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:47.918 [2024-09-29 21:55:06.815706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:47.918 [2024-09-29 21:55:06.815712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:47.918 [2024-09-29 21:55:06.815719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.918 [2024-09-29 21:55:06.815767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:47.918 [2024-09-29 21:55:06.815774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:47.918 [2024-09-29 21:55:06.815783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:47.918 [2024-09-29 21:55:06.815789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.918 [2024-09-29 21:55:06.815802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:47.918 [2024-09-29 21:55:06.815807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:47.918 [2024-09-29 21:55:06.815813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:47.918 [2024-09-29 21:55:06.815819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:47.918 [2024-09-29 21:55:06.875308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:47.918 [2024-09-29 21:55:06.875339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:47.918 [2024-09-29 21:55:06.875348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:47.918 [2024-09-29 21:55:06.875354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.180 [2024-09-29 21:55:06.924049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:48.181 [2024-09-29 21:55:06.924082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:48.181 [2024-09-29 21:55:06.924091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:48.181 [2024-09-29 21:55:06.924097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.181 [2024-09-29 21:55:06.924159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:48.181 [2024-09-29 21:55:06.924167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:48.181 [2024-09-29 21:55:06.924174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:48.181 [2024-09-29 21:55:06.924183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.181 [2024-09-29 21:55:06.924214] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:48.181 [2024-09-29 21:55:06.924224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:48.181 [2024-09-29 21:55:06.924230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:48.181 [2024-09-29 21:55:06.924236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.181 [2024-09-29 21:55:06.924301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:48.181 [2024-09-29 21:55:06.924309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:48.181 [2024-09-29 21:55:06.924315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:48.181 [2024-09-29 21:55:06.924323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.181 [2024-09-29 21:55:06.924353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:48.181 [2024-09-29 21:55:06.924360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:23:48.181 [2024-09-29 21:55:06.924366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:48.181 [2024-09-29 21:55:06.924372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.181 [2024-09-29 21:55:06.924420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:48.181 [2024-09-29 21:55:06.924428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:48.181 [2024-09-29 21:55:06.924434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:48.181 [2024-09-29 21:55:06.924440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.181 [2024-09-29 21:55:06.924474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:48.181 [2024-09-29 21:55:06.924481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:48.181 [2024-09-29 21:55:06.924487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:48.181 [2024-09-29 21:55:06.924492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:48.181 [2024-09-29 21:55:06.924581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 193.895 ms, result 0 00:23:48.752 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:48.753 Remove shared memory files 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78162 00:23:48.753 21:55:07 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:23:48.753 ************************************ 00:23:48.753 END TEST ftl_upgrade_shutdown 00:23:48.753 ************************************ 00:23:48.753 00:23:48.753 real 1m19.848s 00:23:48.753 user 1m50.120s 00:23:48.753 sys 0m19.423s 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:48.753 21:55:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:48.753 21:55:07 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:23:48.753 21:55:07 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:48.753 21:55:07 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:23:48.753 21:55:07 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:48.753 21:55:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:48.753 ************************************ 00:23:48.753 START TEST ftl_restore_fast 00:23:48.753 ************************************ 00:23:48.753 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:49.014 * Looking for test storage... 00:23:49.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:49.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.014 --rc genhtml_branch_coverage=1 00:23:49.014 --rc genhtml_function_coverage=1 00:23:49.014 --rc genhtml_legend=1 00:23:49.014 --rc geninfo_all_blocks=1 00:23:49.014 --rc geninfo_unexecuted_blocks=1 00:23:49.014 00:23:49.014 ' 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:49.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.014 --rc genhtml_branch_coverage=1 00:23:49.014 --rc genhtml_function_coverage=1 00:23:49.014 --rc genhtml_legend=1 00:23:49.014 --rc geninfo_all_blocks=1 00:23:49.014 --rc geninfo_unexecuted_blocks=1 00:23:49.014 00:23:49.014 ' 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:49.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.014 --rc genhtml_branch_coverage=1 00:23:49.014 --rc genhtml_function_coverage=1 00:23:49.014 --rc genhtml_legend=1 00:23:49.014 --rc geninfo_all_blocks=1 00:23:49.014 --rc geninfo_unexecuted_blocks=1 00:23:49.014 00:23:49.014 ' 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:49.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.014 --rc genhtml_branch_coverage=1 00:23:49.014 --rc genhtml_function_coverage=1 00:23:49.014 --rc genhtml_legend=1 00:23:49.014 --rc geninfo_all_blocks=1 00:23:49.014 --rc geninfo_unexecuted_blocks=1 00:23:49.014 00:23:49.014 ' 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:49.014 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
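The xtrace above is scripts/common.sh gating lcov flags on the installed version: cmp_versions splits each version string on IFS=.-: and compares the numeric components pairwise (1.15 sorts before 2, so it returns 0 and the branch/function-coverage options get exported). A minimal standalone sketch of that comparison idea, using a hypothetical helper name rather than the repo's exact code:

version_lt() {  # true if $1 sorts before $2, component-wise
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        ((a < b)) && return 0   # earlier component decides
        ((a > b)) && return 1
    done
    return 1  # equal is not less-than
}

version_lt 1.15 2 && echo 'old lcov: enable the --rc lcov_*_coverage=1 options'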
00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.FVXM5t2MsM 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:49.015 21:55:07 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=78638 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 78638 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 78638 ']' 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.015 21:55:07 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:23:49.015 [2024-09-29 21:55:07.938287] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:49.015 [2024-09-29 21:55:07.938491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78638 ] 00:23:49.276 [2024-09-29 21:55:08.083708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.536 [2024-09-29 21:55:08.293573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.104 21:55:08 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.104 21:55:08 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:23:50.104 21:55:08 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:50.104 21:55:08 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:23:50.104 21:55:08 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:50.104 21:55:08 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:23:50.104 21:55:08 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:23:50.104 21:55:08 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:50.363 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:50.363 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:23:50.363 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:50.363 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:50.363 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:50.363 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:50.363 21:55:09 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:23:50.363 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:50.622 { 00:23:50.622 "name": "nvme0n1", 00:23:50.622 "aliases": [ 00:23:50.622 "61d23bb1-f133-4c7d-8f7e-6818d452cb42" 00:23:50.622 ], 00:23:50.622 "product_name": "NVMe disk", 00:23:50.622 "block_size": 4096, 00:23:50.622 "num_blocks": 1310720, 00:23:50.622 "uuid": "61d23bb1-f133-4c7d-8f7e-6818d452cb42", 00:23:50.622 "numa_id": -1, 00:23:50.622 "assigned_rate_limits": { 00:23:50.622 "rw_ios_per_sec": 0, 00:23:50.622 "rw_mbytes_per_sec": 0, 00:23:50.622 "r_mbytes_per_sec": 0, 00:23:50.622 "w_mbytes_per_sec": 0 00:23:50.622 }, 00:23:50.622 "claimed": true, 00:23:50.622 "claim_type": "read_many_write_one", 00:23:50.622 "zoned": false, 00:23:50.622 "supported_io_types": { 00:23:50.622 "read": true, 00:23:50.622 "write": true, 00:23:50.622 "unmap": true, 00:23:50.622 "flush": true, 00:23:50.622 "reset": true, 00:23:50.622 "nvme_admin": true, 00:23:50.622 "nvme_io": true, 00:23:50.622 "nvme_io_md": false, 00:23:50.622 "write_zeroes": true, 00:23:50.622 "zcopy": false, 00:23:50.622 "get_zone_info": false, 00:23:50.622 "zone_management": false, 00:23:50.622 "zone_append": false, 00:23:50.622 "compare": true, 00:23:50.622 "compare_and_write": false, 00:23:50.622 "abort": true, 00:23:50.622 "seek_hole": false, 00:23:50.622 "seek_data": false, 00:23:50.622 "copy": true, 00:23:50.622 "nvme_iov_md": false 00:23:50.622 }, 00:23:50.622 "driver_specific": { 00:23:50.622 "nvme": [ 00:23:50.622 { 00:23:50.622 "pci_address": "0000:00:11.0", 00:23:50.622 "trid": { 00:23:50.622 "trtype": "PCIe", 00:23:50.622 "traddr": "0000:00:11.0" 00:23:50.622 }, 00:23:50.622 "ctrlr_data": { 00:23:50.622 "cntlid": 0, 00:23:50.622 "vendor_id": "0x1b36", 00:23:50.622 "model_number": "QEMU NVMe Ctrl", 00:23:50.622 "serial_number": "12341", 00:23:50.622 "firmware_revision": "8.0.0", 00:23:50.622 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:50.622 "oacs": { 00:23:50.622 "security": 0, 00:23:50.622 "format": 1, 00:23:50.622 "firmware": 0, 00:23:50.622 "ns_manage": 1 00:23:50.622 }, 00:23:50.622 "multi_ctrlr": false, 00:23:50.622 "ana_reporting": false 00:23:50.622 }, 00:23:50.622 "vs": { 00:23:50.622 "nvme_version": "1.4" 00:23:50.622 }, 00:23:50.622 "ns_data": { 00:23:50.622 "id": 1, 00:23:50.622 "can_share": false 00:23:50.622 } 00:23:50.622 } 00:23:50.622 ], 00:23:50.622 "mp_policy": "active_passive" 00:23:50.622 } 00:23:50.622 } 00:23:50.622 ]' 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:50.622 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:50.880 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=a272967c-d4d9-49f2-aa3a-8aef27b202b8 00:23:50.880 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:23:50.880 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a272967c-d4d9-49f2-aa3a-8aef27b202b8 00:23:51.140 21:55:09 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:51.402 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=e42eb667-b3f3-4a89-a50a-6c73e49bb37c 00:23:51.402 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e42eb667-b3f3-4a89-a50a-6c73e49bb37c 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:51.663 { 00:23:51.663 "name": "08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f", 00:23:51.663 "aliases": [ 00:23:51.663 "lvs/nvme0n1p0" 00:23:51.663 ], 00:23:51.663 "product_name": "Logical Volume", 00:23:51.663 "block_size": 4096, 00:23:51.663 "num_blocks": 26476544, 00:23:51.663 "uuid": "08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f", 00:23:51.663 "assigned_rate_limits": { 00:23:51.663 "rw_ios_per_sec": 0, 00:23:51.663 "rw_mbytes_per_sec": 0, 00:23:51.663 "r_mbytes_per_sec": 0, 00:23:51.663 "w_mbytes_per_sec": 0 00:23:51.663 }, 00:23:51.663 "claimed": false, 00:23:51.663 "zoned": false, 00:23:51.663 "supported_io_types": { 00:23:51.663 "read": true, 00:23:51.663 "write": true, 00:23:51.663 "unmap": true, 00:23:51.663 "flush": false, 00:23:51.663 "reset": true, 00:23:51.663 "nvme_admin": false, 00:23:51.663 "nvme_io": false, 00:23:51.663 "nvme_io_md": false, 00:23:51.663 "write_zeroes": true, 00:23:51.663 "zcopy": false, 00:23:51.663 "get_zone_info": false, 00:23:51.663 "zone_management": false, 00:23:51.663 
"zone_append": false, 00:23:51.663 "compare": false, 00:23:51.663 "compare_and_write": false, 00:23:51.663 "abort": false, 00:23:51.663 "seek_hole": true, 00:23:51.663 "seek_data": true, 00:23:51.663 "copy": false, 00:23:51.663 "nvme_iov_md": false 00:23:51.663 }, 00:23:51.663 "driver_specific": { 00:23:51.663 "lvol": { 00:23:51.663 "lvol_store_uuid": "e42eb667-b3f3-4a89-a50a-6c73e49bb37c", 00:23:51.663 "base_bdev": "nvme0n1", 00:23:51.663 "thin_provision": true, 00:23:51.663 "num_allocated_clusters": 0, 00:23:51.663 "snapshot": false, 00:23:51.663 "clone": false, 00:23:51.663 "esnap_clone": false 00:23:51.663 } 00:23:51.663 } 00:23:51.663 } 00:23:51.663 ]' 00:23:51.663 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:51.923 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:51.923 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:51.923 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:51.923 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:51.923 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:51.923 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:23:51.923 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:23:51.923 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:52.182 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:52.182 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:52.182 21:55:10 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:52.182 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:52.182 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:52.182 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:52.182 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:52.182 21:55:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:52.182 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:52.182 { 00:23:52.182 "name": "08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f", 00:23:52.182 "aliases": [ 00:23:52.182 "lvs/nvme0n1p0" 00:23:52.182 ], 00:23:52.182 "product_name": "Logical Volume", 00:23:52.182 "block_size": 4096, 00:23:52.182 "num_blocks": 26476544, 00:23:52.182 "uuid": "08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f", 00:23:52.182 "assigned_rate_limits": { 00:23:52.182 "rw_ios_per_sec": 0, 00:23:52.182 "rw_mbytes_per_sec": 0, 00:23:52.182 "r_mbytes_per_sec": 0, 00:23:52.182 "w_mbytes_per_sec": 0 00:23:52.182 }, 00:23:52.182 "claimed": false, 00:23:52.182 "zoned": false, 00:23:52.182 "supported_io_types": { 00:23:52.182 "read": true, 00:23:52.182 "write": true, 00:23:52.182 "unmap": true, 00:23:52.182 "flush": false, 00:23:52.182 "reset": true, 00:23:52.182 "nvme_admin": false, 00:23:52.182 "nvme_io": false, 00:23:52.182 "nvme_io_md": false, 00:23:52.182 "write_zeroes": true, 00:23:52.182 "zcopy": false, 00:23:52.182 "get_zone_info": false, 00:23:52.182 
"zone_management": false, 00:23:52.182 "zone_append": false, 00:23:52.182 "compare": false, 00:23:52.182 "compare_and_write": false, 00:23:52.182 "abort": false, 00:23:52.182 "seek_hole": true, 00:23:52.182 "seek_data": true, 00:23:52.182 "copy": false, 00:23:52.182 "nvme_iov_md": false 00:23:52.182 }, 00:23:52.182 "driver_specific": { 00:23:52.182 "lvol": { 00:23:52.182 "lvol_store_uuid": "e42eb667-b3f3-4a89-a50a-6c73e49bb37c", 00:23:52.182 "base_bdev": "nvme0n1", 00:23:52.182 "thin_provision": true, 00:23:52.182 "num_allocated_clusters": 0, 00:23:52.182 "snapshot": false, 00:23:52.182 "clone": false, 00:23:52.182 "esnap_clone": false 00:23:52.182 } 00:23:52.182 } 00:23:52.182 } 00:23:52.182 ]' 00:23:52.182 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:52.440 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:52.699 { 00:23:52.699 "name": "08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f", 00:23:52.699 "aliases": [ 00:23:52.699 "lvs/nvme0n1p0" 00:23:52.699 ], 00:23:52.699 "product_name": "Logical Volume", 00:23:52.699 "block_size": 4096, 00:23:52.699 "num_blocks": 26476544, 00:23:52.699 "uuid": "08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f", 00:23:52.699 "assigned_rate_limits": { 00:23:52.699 "rw_ios_per_sec": 0, 00:23:52.699 "rw_mbytes_per_sec": 0, 00:23:52.699 "r_mbytes_per_sec": 0, 00:23:52.699 "w_mbytes_per_sec": 0 00:23:52.699 }, 00:23:52.699 "claimed": false, 00:23:52.699 "zoned": false, 00:23:52.699 "supported_io_types": { 00:23:52.699 "read": true, 00:23:52.699 "write": true, 00:23:52.699 "unmap": true, 00:23:52.699 "flush": false, 00:23:52.699 "reset": true, 00:23:52.699 "nvme_admin": false, 00:23:52.699 "nvme_io": false, 00:23:52.699 "nvme_io_md": false, 00:23:52.699 "write_zeroes": true, 00:23:52.699 "zcopy": false, 00:23:52.699 "get_zone_info": false, 00:23:52.699 "zone_management": false, 00:23:52.699 "zone_append": false, 00:23:52.699 "compare": false, 00:23:52.699 "compare_and_write": false, 00:23:52.699 "abort": false, 
00:23:52.699 "seek_hole": true, 00:23:52.699 "seek_data": true, 00:23:52.699 "copy": false, 00:23:52.699 "nvme_iov_md": false 00:23:52.699 }, 00:23:52.699 "driver_specific": { 00:23:52.699 "lvol": { 00:23:52.699 "lvol_store_uuid": "e42eb667-b3f3-4a89-a50a-6c73e49bb37c", 00:23:52.699 "base_bdev": "nvme0n1", 00:23:52.699 "thin_provision": true, 00:23:52.699 "num_allocated_clusters": 0, 00:23:52.699 "snapshot": false, 00:23:52.699 "clone": false, 00:23:52.699 "esnap_clone": false 00:23:52.699 } 00:23:52.699 } 00:23:52.699 } 00:23:52.699 ]' 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f --l2p_dram_limit 10' 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:23:52.699 21:55:11 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 08f0c0ea-d7c9-4e84-8bd8-cc07bed8727f --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:23:52.959 [2024-09-29 21:55:11.849362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.849415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:52.959 [2024-09-29 21:55:11.849429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:52.959 [2024-09-29 21:55:11.849436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.849478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.849486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:52.959 [2024-09-29 21:55:11.849495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:52.959 [2024-09-29 21:55:11.849501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.849523] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:52.959 [2024-09-29 21:55:11.850099] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:52.959 [2024-09-29 21:55:11.850124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.850131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:52.959 [2024-09-29 21:55:11.850139] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:23:52.959 [2024-09-29 21:55:11.850147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.850198] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6a175fc5-6fdf-49f1-b3d6-57d4ea920dff 00:23:52.959 [2024-09-29 21:55:11.851480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.851511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:52.959 [2024-09-29 21:55:11.851521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:52.959 [2024-09-29 21:55:11.851531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.858462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.858488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:52.959 [2024-09-29 21:55:11.858496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.893 ms 00:23:52.959 [2024-09-29 21:55:11.858504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.858574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.858583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:52.959 [2024-09-29 21:55:11.858590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:52.959 [2024-09-29 21:55:11.858602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.858637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.858647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:52.959 [2024-09-29 21:55:11.858654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:52.959 [2024-09-29 21:55:11.858661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.858678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:52.959 [2024-09-29 21:55:11.861956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.861979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:52.959 [2024-09-29 21:55:11.861988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.280 ms 00:23:52.959 [2024-09-29 21:55:11.861995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.862023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.862029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:52.959 [2024-09-29 21:55:11.862037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:52.959 [2024-09-29 21:55:11.862044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.862065] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:52.959 [2024-09-29 21:55:11.862175] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:52.959 [2024-09-29 21:55:11.862189] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:52.959 [2024-09-29 21:55:11.862198] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:52.959 [2024-09-29 21:55:11.862209] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:52.959 [2024-09-29 21:55:11.862216] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:52.959 [2024-09-29 21:55:11.862224] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:52.959 [2024-09-29 21:55:11.862230] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:52.959 [2024-09-29 21:55:11.862238] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:52.959 [2024-09-29 21:55:11.862244] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:52.959 [2024-09-29 21:55:11.862252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.862262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:52.959 [2024-09-29 21:55:11.862280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:23:52.959 [2024-09-29 21:55:11.862286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.862353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.959 [2024-09-29 21:55:11.862362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:52.959 [2024-09-29 21:55:11.862370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:52.959 [2024-09-29 21:55:11.862376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.959 [2024-09-29 21:55:11.862461] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:52.959 [2024-09-29 21:55:11.862470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:52.959 [2024-09-29 21:55:11.862478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:52.959 [2024-09-29 21:55:11.862484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.959 [2024-09-29 21:55:11.862492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:52.959 [2024-09-29 21:55:11.862497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:52.959 [2024-09-29 21:55:11.862504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:52.959 [2024-09-29 21:55:11.862509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:52.959 [2024-09-29 21:55:11.862517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:52.959 [2024-09-29 21:55:11.862522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:52.959 [2024-09-29 21:55:11.862532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:52.959 [2024-09-29 21:55:11.862539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:52.959 [2024-09-29 21:55:11.862546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:52.959 [2024-09-29 21:55:11.862551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:52.959 [2024-09-29 21:55:11.862558] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:52.959 [2024-09-29 21:55:11.862563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.959 [2024-09-29 21:55:11.862572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:52.959 [2024-09-29 21:55:11.862577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:52.959 [2024-09-29 21:55:11.862583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.959 [2024-09-29 21:55:11.862588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:52.959 [2024-09-29 21:55:11.862596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:52.959 [2024-09-29 21:55:11.862601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.959 [2024-09-29 21:55:11.862608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:52.959 [2024-09-29 21:55:11.862613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:52.959 [2024-09-29 21:55:11.862619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.959 [2024-09-29 21:55:11.862625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:52.959 [2024-09-29 21:55:11.862631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:52.959 [2024-09-29 21:55:11.862635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.960 [2024-09-29 21:55:11.862642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:52.960 [2024-09-29 21:55:11.862648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:52.960 [2024-09-29 21:55:11.862655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.960 [2024-09-29 21:55:11.862660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:52.960 [2024-09-29 21:55:11.862668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:52.960 [2024-09-29 21:55:11.862673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:52.960 [2024-09-29 21:55:11.862679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:52.960 [2024-09-29 21:55:11.862684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:52.960 [2024-09-29 21:55:11.862690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:52.960 [2024-09-29 21:55:11.862696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:52.960 [2024-09-29 21:55:11.862702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:52.960 [2024-09-29 21:55:11.862707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.960 [2024-09-29 21:55:11.862714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:52.960 [2024-09-29 21:55:11.862719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:52.960 [2024-09-29 21:55:11.862726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.960 [2024-09-29 21:55:11.862731] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:52.960 [2024-09-29 21:55:11.862739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:52.960 [2024-09-29 21:55:11.862746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:23:52.960 [2024-09-29 21:55:11.862754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.960 [2024-09-29 21:55:11.862761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:52.960 [2024-09-29 21:55:11.862770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:52.960 [2024-09-29 21:55:11.862775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:52.960 [2024-09-29 21:55:11.862782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:52.960 [2024-09-29 21:55:11.862787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:52.960 [2024-09-29 21:55:11.862794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:52.960 [2024-09-29 21:55:11.862802] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:52.960 [2024-09-29 21:55:11.862811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:52.960 [2024-09-29 21:55:11.862818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:52.960 [2024-09-29 21:55:11.862825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:52.960 [2024-09-29 21:55:11.862830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:52.960 [2024-09-29 21:55:11.862837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:52.960 [2024-09-29 21:55:11.862843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:52.960 [2024-09-29 21:55:11.862850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:52.960 [2024-09-29 21:55:11.862855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:52.960 [2024-09-29 21:55:11.862862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:52.960 [2024-09-29 21:55:11.862867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:52.960 [2024-09-29 21:55:11.862875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:52.960 [2024-09-29 21:55:11.862881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:52.960 [2024-09-29 21:55:11.862889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:52.960 [2024-09-29 21:55:11.862894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:52.960 [2024-09-29 21:55:11.862900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
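Each Region line in the superblock metadata dump pairs a hex block offset and size (blk_offs/blk_sz); with the 4096-byte block size reported for the bdevs above, those decode to the MiB figures in the preceding layout dump, e.g. the l2p region's blk_sz:0x5000 is the 80.00 MiB shown earlier and each p2l chunk's blk_sz:0x800 is 8.00 MiB. A small conversion sketch (hypothetical helper, assuming the 4 KiB FTL block size from the bdev dump):

blk_to_mib() {  # hex block count -> MiB at 4096 bytes/block
    local blocks=$((16#${1#0x}))
    echo "scale=2; $blocks * 4096 / 1048576" | bc
}

blk_to_mib 0x5000   # 80.00, matches the l2p region above
blk_to_mib 0x800    # 8.00, matches p2l0..p2l3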
00:23:52.960 [2024-09-29 21:55:11.862906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:52.960 [2024-09-29 21:55:11.862915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:52.960 [2024-09-29 21:55:11.862921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:52.960 [2024-09-29 21:55:11.862928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:52.960 [2024-09-29 21:55:11.862933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:52.960 [2024-09-29 21:55:11.862942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:52.960 [2024-09-29 21:55:11.862949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.960 [2024-09-29 21:55:11.862956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:52.960 [2024-09-29 21:55:11.862962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:23:52.960 [2024-09-29 21:55:11.862969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.960 [2024-09-29 21:55:11.863011] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:52.960 [2024-09-29 21:55:11.863023] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:57.166 [2024-09-29 21:55:15.490505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.490602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:57.166 [2024-09-29 21:55:15.490624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3627.476 ms 00:23:57.166 [2024-09-29 21:55:15.490636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.528252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.528329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:57.166 [2024-09-29 21:55:15.528347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.364 ms 00:23:57.166 [2024-09-29 21:55:15.528360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.528537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.528556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:57.166 [2024-09-29 21:55:15.528566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:57.166 [2024-09-29 21:55:15.528587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.583654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.584123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:57.166 [2024-09-29 21:55:15.584182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.022 ms 00:23:57.166 [2024-09-29 21:55:15.584218] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.584317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.584351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:57.166 [2024-09-29 21:55:15.584376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:57.166 [2024-09-29 21:55:15.584459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.585495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.585575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:57.166 [2024-09-29 21:55:15.585605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:23:57.166 [2024-09-29 21:55:15.585644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.585941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.585982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:57.166 [2024-09-29 21:55:15.586005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:23:57.166 [2024-09-29 21:55:15.586035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.608233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.608287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:57.166 [2024-09-29 21:55:15.608299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.151 ms 00:23:57.166 [2024-09-29 21:55:15.608311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.623992] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:57.166 [2024-09-29 21:55:15.629197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.629246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:57.166 [2024-09-29 21:55:15.629264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.761 ms 00:23:57.166 [2024-09-29 21:55:15.629273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.722947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.723180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:57.166 [2024-09-29 21:55:15.723216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.634 ms 00:23:57.166 [2024-09-29 21:55:15.723226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.723475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.723491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:57.166 [2024-09-29 21:55:15.723508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:23:57.166 [2024-09-29 21:55:15.723518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.750791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.750850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:23:57.166 [2024-09-29 21:55:15.750867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.210 ms 00:23:57.166 [2024-09-29 21:55:15.750876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.777026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.777091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:57.166 [2024-09-29 21:55:15.777107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.085 ms 00:23:57.166 [2024-09-29 21:55:15.777117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.777815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.166 [2024-09-29 21:55:15.777841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:57.166 [2024-09-29 21:55:15.777855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:23:57.166 [2024-09-29 21:55:15.777864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.166 [2024-09-29 21:55:15.870885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.167 [2024-09-29 21:55:15.870938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:57.167 [2024-09-29 21:55:15.870959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.952 ms 00:23:57.167 [2024-09-29 21:55:15.870971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.167 [2024-09-29 21:55:15.900449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.167 [2024-09-29 21:55:15.900660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:57.167 [2024-09-29 21:55:15.900692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.368 ms 00:23:57.167 [2024-09-29 21:55:15.900702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.167 [2024-09-29 21:55:15.927487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.167 [2024-09-29 21:55:15.927679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:57.167 [2024-09-29 21:55:15.927707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.731 ms 00:23:57.167 [2024-09-29 21:55:15.927716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.167 [2024-09-29 21:55:15.955240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.167 [2024-09-29 21:55:15.955466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:57.167 [2024-09-29 21:55:15.955497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.469 ms 00:23:57.167 [2024-09-29 21:55:15.955507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.167 [2024-09-29 21:55:15.955566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.167 [2024-09-29 21:55:15.955577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:57.167 [2024-09-29 21:55:15.955599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:57.167 [2024-09-29 21:55:15.955610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.167 [2024-09-29 21:55:15.955731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.167 [2024-09-29 
21:55:15.955745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:57.167 [2024-09-29 21:55:15.955757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:57.167 [2024-09-29 21:55:15.955765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.167 [2024-09-29 21:55:15.957213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4107.231 ms, result 0 00:23:57.167 { 00:23:57.167 "name": "ftl0", 00:23:57.167 "uuid": "6a175fc5-6fdf-49f1-b3d6-57d4ea920dff" 00:23:57.167 } 00:23:57.167 21:55:15 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:57.167 21:55:15 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:57.428 21:55:16 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:23:57.428 21:55:16 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:57.689 [2024-09-29 21:55:16.416324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.689 [2024-09-29 21:55:16.416410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:57.689 [2024-09-29 21:55:16.416426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:57.689 [2024-09-29 21:55:16.416439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.689 [2024-09-29 21:55:16.416467] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:57.689 [2024-09-29 21:55:16.419877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.689 [2024-09-29 21:55:16.419925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:57.689 [2024-09-29 21:55:16.419954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.386 ms 00:23:57.689 [2024-09-29 21:55:16.419963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.689 [2024-09-29 21:55:16.420271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.689 [2024-09-29 21:55:16.420285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:57.689 [2024-09-29 21:55:16.420300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:23:57.689 [2024-09-29 21:55:16.420311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.689 [2024-09-29 21:55:16.423609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.689 [2024-09-29 21:55:16.423638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:57.689 [2024-09-29 21:55:16.423650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:23:57.689 [2024-09-29 21:55:16.423663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.689 [2024-09-29 21:55:16.429900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.689 [2024-09-29 21:55:16.429941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:57.689 [2024-09-29 21:55:16.429957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.212 ms 00:23:57.689 [2024-09-29 21:55:16.429967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.689 [2024-09-29 21:55:16.456866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:57.690 [2024-09-29 21:55:16.456918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:57.690 [2024-09-29 21:55:16.456934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.815 ms 00:23:57.690 [2024-09-29 21:55:16.456943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.690 [2024-09-29 21:55:16.476068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.690 [2024-09-29 21:55:16.476121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:57.690 [2024-09-29 21:55:16.476138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.056 ms 00:23:57.690 [2024-09-29 21:55:16.476146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.690 [2024-09-29 21:55:16.476332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.690 [2024-09-29 21:55:16.476350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:57.690 [2024-09-29 21:55:16.476363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:23:57.690 [2024-09-29 21:55:16.476372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.690 [2024-09-29 21:55:16.503200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.690 [2024-09-29 21:55:16.503249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:57.690 [2024-09-29 21:55:16.503265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.783 ms 00:23:57.690 [2024-09-29 21:55:16.503273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.690 [2024-09-29 21:55:16.529430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.690 [2024-09-29 21:55:16.529483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:57.690 [2024-09-29 21:55:16.529498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.097 ms 00:23:57.690 [2024-09-29 21:55:16.529505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.690 [2024-09-29 21:55:16.551263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.690 [2024-09-29 21:55:16.551477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:57.690 [2024-09-29 21:55:16.551502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.694 ms 00:23:57.690 [2024-09-29 21:55:16.551509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.690 [2024-09-29 21:55:16.571497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.690 [2024-09-29 21:55:16.571535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:57.690 [2024-09-29 21:55:16.571548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.745 ms 00:23:57.690 [2024-09-29 21:55:16.571554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.690 [2024-09-29 21:55:16.571597] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:57.690 [2024-09-29 21:55:16.571612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571629] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571821] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.571997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 
[2024-09-29 21:55:16.572003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:57.690 [2024-09-29 21:55:16.572103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:23:57.691 [2024-09-29 21:55:16.572204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:57.691 [2024-09-29 21:55:16.572381] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:57.691 [2024-09-29 21:55:16.572411] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a175fc5-6fdf-49f1-b3d6-57d4ea920dff 
00:23:57.691 [2024-09-29 21:55:16.572418] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:57.691 [2024-09-29 21:55:16.572428] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:57.691 [2024-09-29 21:55:16.572435] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:57.691 [2024-09-29 21:55:16.572443] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:57.691 [2024-09-29 21:55:16.572449] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:57.691 [2024-09-29 21:55:16.572458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:57.691 [2024-09-29 21:55:16.572466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:57.691 [2024-09-29 21:55:16.572474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:57.691 [2024-09-29 21:55:16.572479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:57.691 [2024-09-29 21:55:16.572486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.691 [2024-09-29 21:55:16.572494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:57.691 [2024-09-29 21:55:16.572505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:23:57.691 [2024-09-29 21:55:16.572513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.691 [2024-09-29 21:55:16.583682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.691 [2024-09-29 21:55:16.583717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:57.691 [2024-09-29 21:55:16.583730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.132 ms 00:23:57.691 [2024-09-29 21:55:16.583737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.691 [2024-09-29 21:55:16.584075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.691 [2024-09-29 21:55:16.584093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:57.691 [2024-09-29 21:55:16.584104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:23:57.691 [2024-09-29 21:55:16.584112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.691 [2024-09-29 21:55:16.618427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.691 [2024-09-29 21:55:16.618564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:57.691 [2024-09-29 21:55:16.618582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.691 [2024-09-29 21:55:16.618591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.691 [2024-09-29 21:55:16.618650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.691 [2024-09-29 21:55:16.618658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:57.691 [2024-09-29 21:55:16.618666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.691 [2024-09-29 21:55:16.618673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.691 [2024-09-29 21:55:16.618746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.691 [2024-09-29 21:55:16.618756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:57.691 [2024-09-29 21:55:16.618765] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.691 [2024-09-29 21:55:16.618771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.691 [2024-09-29 21:55:16.618793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.691 [2024-09-29 21:55:16.618799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:57.691 [2024-09-29 21:55:16.618809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.691 [2024-09-29 21:55:16.618815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.950 [2024-09-29 21:55:16.682604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.950 [2024-09-29 21:55:16.682639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:57.950 [2024-09-29 21:55:16.682649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.950 [2024-09-29 21:55:16.682656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.950 [2024-09-29 21:55:16.733966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.950 [2024-09-29 21:55:16.734127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:57.950 [2024-09-29 21:55:16.734143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.950 [2024-09-29 21:55:16.734151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.950 [2024-09-29 21:55:16.734224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.950 [2024-09-29 21:55:16.734231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:57.950 [2024-09-29 21:55:16.734240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.950 [2024-09-29 21:55:16.734246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.950 [2024-09-29 21:55:16.734312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.950 [2024-09-29 21:55:16.734323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:57.950 [2024-09-29 21:55:16.734332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.950 [2024-09-29 21:55:16.734338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.950 [2024-09-29 21:55:16.734434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.950 [2024-09-29 21:55:16.734443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:57.950 [2024-09-29 21:55:16.734451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.950 [2024-09-29 21:55:16.734458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.950 [2024-09-29 21:55:16.734488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.950 [2024-09-29 21:55:16.734495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:57.950 [2024-09-29 21:55:16.734505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.950 [2024-09-29 21:55:16.734511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.950 [2024-09-29 21:55:16.734547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.950 [2024-09-29 21:55:16.734556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:23:57.950 [2024-09-29 21:55:16.734564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.950 [2024-09-29 21:55:16.734570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.950 [2024-09-29 21:55:16.734615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.950 [2024-09-29 21:55:16.734625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:57.950 [2024-09-29 21:55:16.734633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.950 [2024-09-29 21:55:16.734639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.950 [2024-09-29 21:55:16.734760] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 318.411 ms, result 0 00:23:57.950 true 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 78638 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78638 ']' 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78638 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78638 00:23:57.950 killing process with pid 78638 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78638' 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 78638 00:23:57.950 21:55:16 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 78638 00:24:06.087 21:55:24 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:09.381 262144+0 records in 00:24:09.381 262144+0 records out 00:24:09.381 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.71797 s, 289 MB/s 00:24:09.381 21:55:27 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:11.295 21:55:29 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:11.295 [2024-09-29 21:55:30.004683] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
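A quick cross-check of the dd figures above: bs=4K count=256K is 262144 blocks of 4096 B, i.e. exactly 1073741824 B (1.0 GiB), and 1073741824 B / 3.71797 s ≈ 289 MB/s in dd's decimal megabytes, which matches the reported rate. A minimal sketch reproducing this step of restore.sh outside the harness (the testfile path is a stand-in, not the workspace path used above):

    #!/usr/bin/env bash
    # Recreate the 1 GiB random test file and report decimal MB/s,
    # mirroring `dd if=/dev/urandom of=.../testfile bs=4K count=256K`.
    set -euo pipefail

    testfile=${1:-/tmp/ftl_testfile}   # stand-in path

    start=$(date +%s.%N)
    dd if=/dev/urandom of="$testfile" bs=4K count=256K status=none
    end=$(date +%s.%N)

    bytes=$(stat -c%s "$testfile")     # expect 1073741824
    awk -v b="$bytes" -v t0="$start" -v t1="$end" \
        'BEGIN { t = t1 - t0; printf "%d bytes in %.2f s -> %.0f MB/s\n", b, t, b / t / 1e6 }'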
00:24:11.295 [2024-09-29 21:55:30.004805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78867 ] 00:24:11.295 [2024-09-29 21:55:30.155444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.556 [2024-09-29 21:55:30.349904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.816 [2024-09-29 21:55:30.632789] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.816 [2024-09-29 21:55:30.633121] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:11.816 [2024-09-29 21:55:30.791287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.816 [2024-09-29 21:55:30.791459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:11.816 [2024-09-29 21:55:30.791478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:11.816 [2024-09-29 21:55:30.791493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.816 [2024-09-29 21:55:30.791542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.816 [2024-09-29 21:55:30.791552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:11.816 [2024-09-29 21:55:30.791560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:11.816 [2024-09-29 21:55:30.791567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.816 [2024-09-29 21:55:30.791586] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:11.816 [2024-09-29 21:55:30.792216] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:11.816 [2024-09-29 21:55:30.792231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.816 [2024-09-29 21:55:30.792238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:11.816 [2024-09-29 21:55:30.792246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:24:11.816 [2024-09-29 21:55:30.792253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.816 [2024-09-29 21:55:30.793325] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:12.076 [2024-09-29 21:55:30.806096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.076 [2024-09-29 21:55:30.806129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:12.076 [2024-09-29 21:55:30.806148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.773 ms 00:24:12.076 [2024-09-29 21:55:30.806156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.076 [2024-09-29 21:55:30.806215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.076 [2024-09-29 21:55:30.806224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:12.076 [2024-09-29 21:55:30.806237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:12.076 [2024-09-29 21:55:30.806247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.076 [2024-09-29 21:55:30.811309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
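The ftl.json handed to spdk_dd via --json above is the file restore.sh assembled just before tearing the target down: the echo '{"subsystems": [' / rpc.py save_subsystem_config -n bdev / echo ']}' sequence from restore.sh@61-63 earlier in this log wraps the saved bdev subsystem configuration in the envelope spdk_dd expects. In sketch form (the redirection into ftl.json is implied by the later --json argument rather than shown explicitly in the trace):

    # Capture the running target's bdev subsystem config and wrap it in a
    # {"subsystems": [...]} envelope; paths as they appear in this run.
    {
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json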
00:24:12.076 [2024-09-29 21:55:30.811447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:12.076 [2024-09-29 21:55:30.811464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.985 ms 00:24:12.076 [2024-09-29 21:55:30.811471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.076 [2024-09-29 21:55:30.811541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.076 [2024-09-29 21:55:30.811549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:12.076 [2024-09-29 21:55:30.811557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:12.076 [2024-09-29 21:55:30.811564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.076 [2024-09-29 21:55:30.811611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.076 [2024-09-29 21:55:30.811621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:12.076 [2024-09-29 21:55:30.811629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:12.076 [2024-09-29 21:55:30.811636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.076 [2024-09-29 21:55:30.811656] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:12.076 [2024-09-29 21:55:30.815066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.076 [2024-09-29 21:55:30.815182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:12.076 [2024-09-29 21:55:30.815198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:24:12.076 [2024-09-29 21:55:30.815206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.076 [2024-09-29 21:55:30.815237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.076 [2024-09-29 21:55:30.815245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:12.076 [2024-09-29 21:55:30.815253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:12.076 [2024-09-29 21:55:30.815260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.077 [2024-09-29 21:55:30.815283] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:12.077 [2024-09-29 21:55:30.815300] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:12.077 [2024-09-29 21:55:30.815334] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:12.077 [2024-09-29 21:55:30.815349] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:12.077 [2024-09-29 21:55:30.815468] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:12.077 [2024-09-29 21:55:30.815479] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:12.077 [2024-09-29 21:55:30.815489] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:12.077 [2024-09-29 21:55:30.815501] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:12.077 [2024-09-29 21:55:30.815510] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:12.077 [2024-09-29 21:55:30.815518] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:12.077 [2024-09-29 21:55:30.815525] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:12.077 [2024-09-29 21:55:30.815532] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:12.077 [2024-09-29 21:55:30.815539] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:12.077 [2024-09-29 21:55:30.815547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.077 [2024-09-29 21:55:30.815554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:12.077 [2024-09-29 21:55:30.815561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:24:12.077 [2024-09-29 21:55:30.815568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.077 [2024-09-29 21:55:30.815649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.077 [2024-09-29 21:55:30.815660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:12.077 [2024-09-29 21:55:30.815667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:12.077 [2024-09-29 21:55:30.815674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.077 [2024-09-29 21:55:30.815786] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:12.077 [2024-09-29 21:55:30.815796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:12.077 [2024-09-29 21:55:30.815804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:12.077 [2024-09-29 21:55:30.815812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.077 [2024-09-29 21:55:30.815819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:12.077 [2024-09-29 21:55:30.815825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:12.077 [2024-09-29 21:55:30.815832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:12.077 [2024-09-29 21:55:30.815839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:12.077 [2024-09-29 21:55:30.815848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:12.077 [2024-09-29 21:55:30.815854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:12.077 [2024-09-29 21:55:30.815861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:12.077 [2024-09-29 21:55:30.815868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:12.077 [2024-09-29 21:55:30.815875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:12.077 [2024-09-29 21:55:30.815886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:12.077 [2024-09-29 21:55:30.815893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:12.077 [2024-09-29 21:55:30.815899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.077 [2024-09-29 21:55:30.815905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:12.077 [2024-09-29 21:55:30.815912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:12.077 [2024-09-29 21:55:30.815919] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.077 [2024-09-29 21:55:30.815925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:12.077 [2024-09-29 21:55:30.815932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:12.077 [2024-09-29 21:55:30.815939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.077 [2024-09-29 21:55:30.815945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:12.077 [2024-09-29 21:55:30.815951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:12.077 [2024-09-29 21:55:30.815957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.077 [2024-09-29 21:55:30.815964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:12.077 [2024-09-29 21:55:30.815970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:12.077 [2024-09-29 21:55:30.815977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.077 [2024-09-29 21:55:30.815983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:12.077 [2024-09-29 21:55:30.815989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:12.077 [2024-09-29 21:55:30.815995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.077 [2024-09-29 21:55:30.816002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:12.077 [2024-09-29 21:55:30.816008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:12.077 [2024-09-29 21:55:30.816014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:12.077 [2024-09-29 21:55:30.816021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:12.077 [2024-09-29 21:55:30.816027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:12.077 [2024-09-29 21:55:30.816034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:12.077 [2024-09-29 21:55:30.816041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:12.077 [2024-09-29 21:55:30.816047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:12.077 [2024-09-29 21:55:30.816053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.077 [2024-09-29 21:55:30.816060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:12.077 [2024-09-29 21:55:30.816066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:12.077 [2024-09-29 21:55:30.816072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.077 [2024-09-29 21:55:30.816082] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:12.077 [2024-09-29 21:55:30.816090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:12.077 [2024-09-29 21:55:30.816099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:12.077 [2024-09-29 21:55:30.816106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.077 [2024-09-29 21:55:30.816113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:12.077 [2024-09-29 21:55:30.816120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:12.077 [2024-09-29 21:55:30.816126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:12.077 
[2024-09-29 21:55:30.816133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:12.077 [2024-09-29 21:55:30.816139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:12.077 [2024-09-29 21:55:30.816145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:12.077 [2024-09-29 21:55:30.816153] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:12.077 [2024-09-29 21:55:30.816162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:12.077 [2024-09-29 21:55:30.816170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:12.077 [2024-09-29 21:55:30.816178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:12.077 [2024-09-29 21:55:30.816184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:12.077 [2024-09-29 21:55:30.816191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:12.077 [2024-09-29 21:55:30.816198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:12.077 [2024-09-29 21:55:30.816205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:12.077 [2024-09-29 21:55:30.816212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:12.077 [2024-09-29 21:55:30.816219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:12.077 [2024-09-29 21:55:30.816226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:12.077 [2024-09-29 21:55:30.816233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:12.077 [2024-09-29 21:55:30.816240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:12.077 [2024-09-29 21:55:30.816247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:12.077 [2024-09-29 21:55:30.816254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:12.077 [2024-09-29 21:55:30.816261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:12.077 [2024-09-29 21:55:30.816268] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:12.077 [2024-09-29 21:55:30.816275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:12.077 [2024-09-29 21:55:30.816283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:12.077 [2024-09-29 21:55:30.816290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:12.077 [2024-09-29 21:55:30.816297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:12.078 [2024-09-29 21:55:30.816304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:12.078 [2024-09-29 21:55:30.816312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.816320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:12.078 [2024-09-29 21:55:30.816327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:24:12.078 [2024-09-29 21:55:30.816334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.851483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.851529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:12.078 [2024-09-29 21:55:30.851544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.105 ms 00:24:12.078 [2024-09-29 21:55:30.851554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.851667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.851678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:12.078 [2024-09-29 21:55:30.851689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:12.078 [2024-09-29 21:55:30.851698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.882501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.882533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:12.078 [2024-09-29 21:55:30.882546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.737 ms 00:24:12.078 [2024-09-29 21:55:30.882553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.882582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.882590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:12.078 [2024-09-29 21:55:30.882598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:12.078 [2024-09-29 21:55:30.882605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.882961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.882983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:12.078 [2024-09-29 21:55:30.882993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:24:12.078 [2024-09-29 21:55:30.883003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.883127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.883141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:12.078 [2024-09-29 21:55:30.883149] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:24:12.078 [2024-09-29 21:55:30.883156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.895670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.895699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:12.078 [2024-09-29 21:55:30.895708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.495 ms 00:24:12.078 [2024-09-29 21:55:30.895716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.908347] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:12.078 [2024-09-29 21:55:30.908382] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:12.078 [2024-09-29 21:55:30.908407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.908415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:12.078 [2024-09-29 21:55:30.908424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.590 ms 00:24:12.078 [2024-09-29 21:55:30.908430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.932767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.932800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:12.078 [2024-09-29 21:55:30.932810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.299 ms 00:24:12.078 [2024-09-29 21:55:30.932817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.944708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.944738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:12.078 [2024-09-29 21:55:30.944747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.854 ms 00:24:12.078 [2024-09-29 21:55:30.944754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.955935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.956062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:12.078 [2024-09-29 21:55:30.956080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.149 ms 00:24:12.078 [2024-09-29 21:55:30.956088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:30.956709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:30.956729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:12.078 [2024-09-29 21:55:30.956739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:24:12.078 [2024-09-29 21:55:30.956746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:31.011992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:31.012033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:12.078 [2024-09-29 21:55:31.012046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.228 ms 00:24:12.078 [2024-09-29 21:55:31.012053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:31.022412] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:12.078 [2024-09-29 21:55:31.024837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:31.024867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:12.078 [2024-09-29 21:55:31.024878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.739 ms 00:24:12.078 [2024-09-29 21:55:31.024886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:31.024968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:31.024978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:12.078 [2024-09-29 21:55:31.024987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:12.078 [2024-09-29 21:55:31.024994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:31.025056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:31.025067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:12.078 [2024-09-29 21:55:31.025075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:12.078 [2024-09-29 21:55:31.025082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:31.025102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:31.025112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:12.078 [2024-09-29 21:55:31.025120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:12.078 [2024-09-29 21:55:31.025127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:31.025157] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:12.078 [2024-09-29 21:55:31.025167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:31.025174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:12.078 [2024-09-29 21:55:31.025182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:12.078 [2024-09-29 21:55:31.025192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:31.048485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:31.048519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:12.078 [2024-09-29 21:55:31.048530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.276 ms 00:24:12.078 [2024-09-29 21:55:31.048538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.078 [2024-09-29 21:55:31.048606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.078 [2024-09-29 21:55:31.048616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:12.078 [2024-09-29 21:55:31.048624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:12.078 [2024-09-29 21:55:31.048632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
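With hundreds of trace_step records in a run like this, a small filter that pairs each "name:" record with the "duration:" that follows makes the per-step timings easier to scan. A sketch, assuming the raw console log with one record per line (the file name build.log is a placeholder):

    # List each FTL management step next to its reported duration.
    sed -n \
      -e 's/.*428:trace_step:.*name: \(.*\)/NAME \1/p' \
      -e 's/.*430:trace_step:.*duration: \([0-9.]*\) ms.*/DUR \1/p' \
      build.log |
    awk '/^NAME / { sub(/^NAME /, ""); step = $0; next }
         /^DUR /  { printf "%-40s %10.3f ms\n", step, $2 }'

As a spot check against the records just above, the 55.228 ms reported for "Restore P2L checkpoints" lines up with the wall-clock gap between the preceding step's record at 21:55:30.956746 and its own records at 21:55:31.012033.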
00:24:12.078 [2024-09-29 21:55:31.049511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.806 ms, result 0 00:25:07.651  Copying: 1024/1024 [MB] (average 18 MBps)[2024-09-29 21:56:26.475154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.651 [2024-09-29 21:56:26.475198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:07.651 [2024-09-29 21:56:26.475212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:07.651 [2024-09-29 21:56:26.475221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.651 [2024-09-29 21:56:26.475246] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:07.651 [2024-09-29 21:56:26.478030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.651 [2024-09-29 21:56:26.478066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:07.651 [2024-09-29 21:56:26.478076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.769 ms 00:25:07.651 [2024-09-29 21:56:26.478084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.651 [2024-09-29 21:56:26.481373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.651 [2024-09-29 21:56:26.481513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:07.651 [2024-09-29 21:56:26.481549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:25:07.651 [2024-09-29 21:56:26.481573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.651 
[2024-09-29 21:56:26.481667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.651 [2024-09-29 21:56:26.481697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:25:07.651 [2024-09-29 21:56:26.481724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:07.651 [2024-09-29 21:56:26.481746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.651 [2024-09-29 21:56:26.481870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.651 [2024-09-29 21:56:26.481896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:25:07.651 [2024-09-29 21:56:26.481922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:07.651 [2024-09-29 21:56:26.481944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.651 [2024-09-29 21:56:26.481984] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:07.651 [2024-09-29 21:56:26.482018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.482368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.483092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.483459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.483687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.483789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 
261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.483823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.483848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:07.651 [2024-09-29 21:56:26.483871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.483894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.483918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.483941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.483965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.483989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.484982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485028] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 
21:56:26.485646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:07.652 [2024-09-29 21:56:26.485859] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:07.652 [2024-09-29 21:56:26.485884] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a175fc5-6fdf-49f1-b3d6-57d4ea920dff 00:25:07.652 [2024-09-29 21:56:26.485908] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:07.652 [2024-09-29 21:56:26.485929] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:25:07.652 [2024-09-29 21:56:26.485949] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:07.652 [2024-09-29 21:56:26.485971] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:07.652 [2024-09-29 21:56:26.485992] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:07.652 [2024-09-29 21:56:26.486015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:07.652 [2024-09-29 21:56:26.486036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:07.652 [2024-09-29 21:56:26.486055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:07.652 [2024-09-29 21:56:26.486074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:07.652 [2024-09-29 21:56:26.486101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.653 [2024-09-29 21:56:26.486126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:07.653 [2024-09-29 21:56:26.486153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.117 ms 00:25:07.653 [2024-09-29 21:56:26.486183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.653 [2024-09-29 21:56:26.506269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.653 [2024-09-29 21:56:26.506309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:07.653 [2024-09-29 21:56:26.506320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.949 ms 00:25:07.653 [2024-09-29 21:56:26.506327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.653 [2024-09-29 21:56:26.506723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.653 [2024-09-29 21:56:26.506734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:07.653 [2024-09-29 
21:56:26.506747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:25:07.653 [2024-09-29 21:56:26.506756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.653 [2024-09-29 21:56:26.536900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.653 [2024-09-29 21:56:26.536945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:07.653 [2024-09-29 21:56:26.536956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.653 [2024-09-29 21:56:26.536963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.653 [2024-09-29 21:56:26.537036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.653 [2024-09-29 21:56:26.537044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:07.653 [2024-09-29 21:56:26.537056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.653 [2024-09-29 21:56:26.537064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.653 [2024-09-29 21:56:26.537133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.653 [2024-09-29 21:56:26.537143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:07.653 [2024-09-29 21:56:26.537150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.653 [2024-09-29 21:56:26.537161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.653 [2024-09-29 21:56:26.537176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.653 [2024-09-29 21:56:26.537184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:07.653 [2024-09-29 21:56:26.537191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.653 [2024-09-29 21:56:26.537200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.653 [2024-09-29 21:56:26.623258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.653 [2024-09-29 21:56:26.623316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:07.653 [2024-09-29 21:56:26.623330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.653 [2024-09-29 21:56:26.623338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.913 [2024-09-29 21:56:26.695156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.913 [2024-09-29 21:56:26.695214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:07.913 [2024-09-29 21:56:26.695228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.913 [2024-09-29 21:56:26.695243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.913 [2024-09-29 21:56:26.695300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.913 [2024-09-29 21:56:26.695309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:07.913 [2024-09-29 21:56:26.695318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.913 [2024-09-29 21:56:26.695326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.913 [2024-09-29 21:56:26.695405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.913 [2024-09-29 21:56:26.695416] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:07.913 [2024-09-29 21:56:26.695426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.913 [2024-09-29 21:56:26.695434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.913 [2024-09-29 21:56:26.695521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.913 [2024-09-29 21:56:26.695531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:07.913 [2024-09-29 21:56:26.695540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.913 [2024-09-29 21:56:26.695548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.913 [2024-09-29 21:56:26.695579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.913 [2024-09-29 21:56:26.695589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:07.913 [2024-09-29 21:56:26.695598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.913 [2024-09-29 21:56:26.695606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.913 [2024-09-29 21:56:26.695660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.913 [2024-09-29 21:56:26.695670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:07.913 [2024-09-29 21:56:26.695679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.913 [2024-09-29 21:56:26.695686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.913 [2024-09-29 21:56:26.695736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.913 [2024-09-29 21:56:26.695746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:07.913 [2024-09-29 21:56:26.695756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.913 [2024-09-29 21:56:26.695764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.913 [2024-09-29 21:56:26.695904] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 220.708 ms, result 0 00:25:08.856 00:25:08.856 00:25:09.116 21:56:27 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:09.117 [2024-09-29 21:56:27.922428] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:25:09.117 [2024-09-29 21:56:27.922559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79463 ] 00:25:09.117 [2024-09-29 21:56:28.071237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.377 [2024-09-29 21:56:28.295561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.638 [2024-09-29 21:56:28.588493] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.638 [2024-09-29 21:56:28.588576] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.901 [2024-09-29 21:56:28.750930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-09-29 21:56:28.751168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:09.901 [2024-09-29 21:56:28.751194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:09.901 [2024-09-29 21:56:28.751211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-09-29 21:56:28.751284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-09-29 21:56:28.751296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:09.901 [2024-09-29 21:56:28.751307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:09.901 [2024-09-29 21:56:28.751315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-09-29 21:56:28.751339] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:09.901 [2024-09-29 21:56:28.752122] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:09.901 [2024-09-29 21:56:28.752143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-09-29 21:56:28.752152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:09.901 [2024-09-29 21:56:28.752163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:25:09.901 [2024-09-29 21:56:28.752172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-09-29 21:56:28.752448] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:25:09.901 [2024-09-29 21:56:28.752482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-09-29 21:56:28.752492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:09.901 [2024-09-29 21:56:28.752502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:09.901 [2024-09-29 21:56:28.752510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-09-29 21:56:28.752624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-09-29 21:56:28.752636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:09.901 [2024-09-29 21:56:28.752646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:09.901 [2024-09-29 21:56:28.752657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-09-29 21:56:28.753122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:09.901 [2024-09-29 21:56:28.753154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:09.901 [2024-09-29 21:56:28.753166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:25:09.901 [2024-09-29 21:56:28.753175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-09-29 21:56:28.753248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-09-29 21:56:28.753258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:09.901 [2024-09-29 21:56:28.753270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:09.901 [2024-09-29 21:56:28.753278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-09-29 21:56:28.753306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-09-29 21:56:28.753316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:09.901 [2024-09-29 21:56:28.753325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:09.901 [2024-09-29 21:56:28.753333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-09-29 21:56:28.753354] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:09.901 [2024-09-29 21:56:28.757796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-09-29 21:56:28.758010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:09.901 [2024-09-29 21:56:28.758030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.447 ms 00:25:09.901 [2024-09-29 21:56:28.758038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.901 [2024-09-29 21:56:28.758079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.901 [2024-09-29 21:56:28.758087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:09.902 [2024-09-29 21:56:28.758102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:09.902 [2024-09-29 21:56:28.758110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.902 [2024-09-29 21:56:28.758170] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:09.902 [2024-09-29 21:56:28.758196] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:09.902 [2024-09-29 21:56:28.758233] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:09.902 [2024-09-29 21:56:28.758276] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:09.902 [2024-09-29 21:56:28.758382] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:09.902 [2024-09-29 21:56:28.758415] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:09.902 [2024-09-29 21:56:28.758427] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:09.902 [2024-09-29 21:56:28.758438] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:09.902 [2024-09-29 21:56:28.758448] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:09.902 [2024-09-29 21:56:28.758457] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:09.902 [2024-09-29 21:56:28.758464] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:09.902 [2024-09-29 21:56:28.758472] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:09.902 [2024-09-29 21:56:28.758480] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:09.902 [2024-09-29 21:56:28.758488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.902 [2024-09-29 21:56:28.758497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:09.902 [2024-09-29 21:56:28.758505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:25:09.902 [2024-09-29 21:56:28.758514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.902 [2024-09-29 21:56:28.758603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.902 [2024-09-29 21:56:28.758613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:09.902 [2024-09-29 21:56:28.758620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:09.902 [2024-09-29 21:56:28.758628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.902 [2024-09-29 21:56:28.758731] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:09.902 [2024-09-29 21:56:28.758743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:09.902 [2024-09-29 21:56:28.758751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.902 [2024-09-29 21:56:28.758760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.902 [2024-09-29 21:56:28.758771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:09.902 [2024-09-29 21:56:28.758778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:09.902 [2024-09-29 21:56:28.758786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:09.902 [2024-09-29 21:56:28.758793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:09.902 [2024-09-29 21:56:28.758803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:09.902 [2024-09-29 21:56:28.758811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.902 [2024-09-29 21:56:28.758819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:09.902 [2024-09-29 21:56:28.758827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:09.902 [2024-09-29 21:56:28.758834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.902 [2024-09-29 21:56:28.758841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:09.902 [2024-09-29 21:56:28.758848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:09.902 [2024-09-29 21:56:28.758860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.902 [2024-09-29 21:56:28.758867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:09.902 [2024-09-29 21:56:28.758874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:09.902 [2024-09-29 21:56:28.758882] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.902 [2024-09-29 21:56:28.758889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:09.902 [2024-09-29 21:56:28.758896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:09.902 [2024-09-29 21:56:28.758903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.902 [2024-09-29 21:56:28.758909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:09.902 [2024-09-29 21:56:28.758916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:09.902 [2024-09-29 21:56:28.758923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.902 [2024-09-29 21:56:28.758930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:09.902 [2024-09-29 21:56:28.758936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:09.902 [2024-09-29 21:56:28.758943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.902 [2024-09-29 21:56:28.758950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:09.902 [2024-09-29 21:56:28.758957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:09.902 [2024-09-29 21:56:28.758964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.902 [2024-09-29 21:56:28.758970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:09.902 [2024-09-29 21:56:28.758977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:09.902 [2024-09-29 21:56:28.758983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.902 [2024-09-29 21:56:28.758990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:09.902 [2024-09-29 21:56:28.758996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:09.902 [2024-09-29 21:56:28.759003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.902 [2024-09-29 21:56:28.759009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:09.902 [2024-09-29 21:56:28.759015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:09.902 [2024-09-29 21:56:28.759022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.902 [2024-09-29 21:56:28.759028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:09.902 [2024-09-29 21:56:28.759035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:09.902 [2024-09-29 21:56:28.759042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.902 [2024-09-29 21:56:28.759052] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:09.902 [2024-09-29 21:56:28.759061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:09.902 [2024-09-29 21:56:28.759069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.902 [2024-09-29 21:56:28.759076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.902 [2024-09-29 21:56:28.759084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:09.902 [2024-09-29 21:56:28.759091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:09.902 [2024-09-29 21:56:28.759098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:09.902 
[2024-09-29 21:56:28.759105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:09.902 [2024-09-29 21:56:28.759111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:09.902 [2024-09-29 21:56:28.759117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:09.902 [2024-09-29 21:56:28.759126] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:09.902 [2024-09-29 21:56:28.759135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.902 [2024-09-29 21:56:28.759146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:09.902 [2024-09-29 21:56:28.759153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:09.902 [2024-09-29 21:56:28.759160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:09.902 [2024-09-29 21:56:28.759167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:09.902 [2024-09-29 21:56:28.759175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:09.902 [2024-09-29 21:56:28.759182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:09.902 [2024-09-29 21:56:28.759189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:09.902 [2024-09-29 21:56:28.759195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:09.902 [2024-09-29 21:56:28.759202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:09.902 [2024-09-29 21:56:28.759209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:09.902 [2024-09-29 21:56:28.759218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:09.902 [2024-09-29 21:56:28.759226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:09.902 [2024-09-29 21:56:28.759233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:09.902 [2024-09-29 21:56:28.759240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:09.903 [2024-09-29 21:56:28.759247] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:09.903 [2024-09-29 21:56:28.759255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.903 [2024-09-29 21:56:28.759264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:09.903 [2024-09-29 21:56:28.759271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:09.903 [2024-09-29 21:56:28.759282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:09.903 [2024-09-29 21:56:28.759290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:09.903 [2024-09-29 21:56:28.759299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.759308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:09.903 [2024-09-29 21:56:28.759318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:25:09.903 [2024-09-29 21:56:28.759326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.802964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.803109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:09.903 [2024-09-29 21:56:28.803133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.596 ms 00:25:09.903 [2024-09-29 21:56:28.803142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.803244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.803255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:09.903 [2024-09-29 21:56:28.803268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:09.903 [2024-09-29 21:56:28.803276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.834199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.834332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:09.903 [2024-09-29 21:56:28.834348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.861 ms 00:25:09.903 [2024-09-29 21:56:28.834357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.834403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.834412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:09.903 [2024-09-29 21:56:28.834421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:09.903 [2024-09-29 21:56:28.834428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.834515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.834530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:09.903 [2024-09-29 21:56:28.834538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:09.903 [2024-09-29 21:56:28.834546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.834657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.834666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:09.903 [2024-09-29 21:56:28.834675] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:25:09.903 [2024-09-29 21:56:28.834682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.847654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.847686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:09.903 [2024-09-29 21:56:28.847696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.954 ms 00:25:09.903 [2024-09-29 21:56:28.847703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.847831] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:09.903 [2024-09-29 21:56:28.847844] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:09.903 [2024-09-29 21:56:28.847854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.847861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:09.903 [2024-09-29 21:56:28.847870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:09.903 [2024-09-29 21:56:28.847877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.860288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.860319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:09.903 [2024-09-29 21:56:28.860329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.396 ms 00:25:09.903 [2024-09-29 21:56:28.860340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.860468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.860479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:09.903 [2024-09-29 21:56:28.860487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:25:09.903 [2024-09-29 21:56:28.860494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.860558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.860568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:09.903 [2024-09-29 21:56:28.860577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:25:09.903 [2024-09-29 21:56:28.860584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.861140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.861164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:09.903 [2024-09-29 21:56:28.861173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:25:09.903 [2024-09-29 21:56:28.861180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.861195] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:25:09.903 [2024-09-29 21:56:28.861205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.861212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:25:09.903 [2024-09-29 21:56:28.861220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:09.903 [2024-09-29 21:56:28.861227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.872676] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:09.903 [2024-09-29 21:56:28.872817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.872831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:09.903 [2024-09-29 21:56:28.872840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.573 ms 00:25:09.903 [2024-09-29 21:56:28.872848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.875001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.875027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:09.903 [2024-09-29 21:56:28.875036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.133 ms 00:25:09.903 [2024-09-29 21:56:28.875044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.875122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.875136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:09.903 [2024-09-29 21:56:28.875144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:09.903 [2024-09-29 21:56:28.875152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.875174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.875182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:09.903 [2024-09-29 21:56:28.875190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:09.903 [2024-09-29 21:56:28.875197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.903 [2024-09-29 21:56:28.875223] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:09.903 [2024-09-29 21:56:28.875233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.903 [2024-09-29 21:56:28.875240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:09.903 [2024-09-29 21:56:28.875250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:09.903 [2024-09-29 21:56:28.875258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.165 [2024-09-29 21:56:28.900530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.165 [2024-09-29 21:56:28.900574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:10.165 [2024-09-29 21:56:28.900587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.252 ms 00:25:10.165 [2024-09-29 21:56:28.900596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.165 [2024-09-29 21:56:28.900679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.165 [2024-09-29 21:56:28.900694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:10.165 [2024-09-29 21:56:28.900703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.039 ms 00:25:10.165 [2024-09-29 21:56:28.900710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.165 [2024-09-29 21:56:28.901739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 150.366 ms, result 0 00:26:13.320  Copying: 1024/1024 [MB] (average 16 MBps)[2024-09-29 21:57:32.163517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.320 [2024-09-29 21:57:32.163600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:13.320 [2024-09-29 21:57:32.163616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:13.320 [2024-09-29 21:57:32.163625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.320 [2024-09-29 21:57:32.163648] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:13.320 [2024-09-29 21:57:32.166607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.320 [2024-09-29 21:57:32.167260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:13.320 [2024-09-29 21:57:32.167283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:26:13.320 [2024-09-29 21:57:32.167292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.320 [2024-09-29 21:57:32.167541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:13.320 [2024-09-29 21:57:32.167556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:13.320 [2024-09-29 21:57:32.167565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:26:13.320 [2024-09-29 21:57:32.167574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.320 [2024-09-29 21:57:32.167601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.320 [2024-09-29 21:57:32.167610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:13.320 [2024-09-29 21:57:32.167619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:13.320 [2024-09-29 21:57:32.167626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.320 [2024-09-29 21:57:32.167677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.320 [2024-09-29 21:57:32.167687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:13.320 [2024-09-29 21:57:32.167698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:13.320 [2024-09-29 21:57:32.167706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.320 [2024-09-29 21:57:32.167719] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:13.320 [2024-09-29 21:57:32.167731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 
21:57:32.167849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.167993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:26:13.320 [2024-09-29 21:57:32.168038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:13.320 [2024-09-29 21:57:32.168091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:13.321 [2024-09-29 21:57:32.168534] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:13.321 [2024-09-29 21:57:32.168543] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a175fc5-6fdf-49f1-b3d6-57d4ea920dff 00:26:13.321 [2024-09-29 21:57:32.168551] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:13.321 [2024-09-29 21:57:32.168558] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:26:13.321 [2024-09-29 21:57:32.168565] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:13.321 [2024-09-29 21:57:32.168573] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:13.321 [2024-09-29 21:57:32.168580] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:13.321 [2024-09-29 21:57:32.168592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:13.321 [2024-09-29 21:57:32.168599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:13.321 [2024-09-29 21:57:32.168605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:13.321 [2024-09-29 21:57:32.168612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:13.321 [2024-09-29 21:57:32.168619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.321 [2024-09-29 21:57:32.168627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:13.321 [2024-09-29 21:57:32.168635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.901 ms 00:26:13.321 [2024-09-29 21:57:32.168642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.321 [2024-09-29 21:57:32.183205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.321 [2024-09-29 21:57:32.183242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:13.321 [2024-09-29 21:57:32.183253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.543 ms 00:26:13.321 [2024-09-29 
21:57:32.183267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.321 [2024-09-29 21:57:32.184257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.321 [2024-09-29 21:57:32.184283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:13.321 [2024-09-29 21:57:32.184293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:26:13.321 [2024-09-29 21:57:32.184302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.321 [2024-09-29 21:57:32.215148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.321 [2024-09-29 21:57:32.215331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:13.321 [2024-09-29 21:57:32.215352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.321 [2024-09-29 21:57:32.215367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.321 [2024-09-29 21:57:32.215462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.321 [2024-09-29 21:57:32.215473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:13.321 [2024-09-29 21:57:32.215483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.321 [2024-09-29 21:57:32.215492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.321 [2024-09-29 21:57:32.215557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.321 [2024-09-29 21:57:32.215568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:13.321 [2024-09-29 21:57:32.215577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.321 [2024-09-29 21:57:32.215586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.321 [2024-09-29 21:57:32.215607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.321 [2024-09-29 21:57:32.215616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:13.321 [2024-09-29 21:57:32.215624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.321 [2024-09-29 21:57:32.215632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.321 [2024-09-29 21:57:32.300177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.321 [2024-09-29 21:57:32.300233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:13.321 [2024-09-29 21:57:32.300246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.321 [2024-09-29 21:57:32.300261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.582 [2024-09-29 21:57:32.371476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.582 [2024-09-29 21:57:32.371541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:13.582 [2024-09-29 21:57:32.371556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.582 [2024-09-29 21:57:32.371565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.582 [2024-09-29 21:57:32.371661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.582 [2024-09-29 21:57:32.371672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:13.582 [2024-09-29 21:57:32.371681] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.583 [2024-09-29 21:57:32.371689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.583 [2024-09-29 21:57:32.371731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.583 [2024-09-29 21:57:32.371741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:13.583 [2024-09-29 21:57:32.371750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.583 [2024-09-29 21:57:32.371757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.583 [2024-09-29 21:57:32.371836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.583 [2024-09-29 21:57:32.371846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:13.583 [2024-09-29 21:57:32.371855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.583 [2024-09-29 21:57:32.371864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.583 [2024-09-29 21:57:32.371893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.583 [2024-09-29 21:57:32.371905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:13.583 [2024-09-29 21:57:32.371914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.583 [2024-09-29 21:57:32.371922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.583 [2024-09-29 21:57:32.371963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.583 [2024-09-29 21:57:32.371974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:13.583 [2024-09-29 21:57:32.371982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.583 [2024-09-29 21:57:32.371991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.583 [2024-09-29 21:57:32.372035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.583 [2024-09-29 21:57:32.372049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:13.583 [2024-09-29 21:57:32.372058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.583 [2024-09-29 21:57:32.372066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.583 [2024-09-29 21:57:32.372194] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 208.644 ms, result 0 00:26:14.525 00:26:14.525 00:26:14.525 21:57:33 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:16.437 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:16.437 21:57:35 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:26:16.437 [2024-09-29 21:57:35.323458] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:26:16.437 [2024-09-29 21:57:35.323736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80142 ] 00:26:16.698 [2024-09-29 21:57:35.472752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.959 [2024-09-29 21:57:35.690636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.222 [2024-09-29 21:57:35.977851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.222 [2024-09-29 21:57:35.977932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.222 [2024-09-29 21:57:36.139413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.139668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:17.222 [2024-09-29 21:57:36.139691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:17.222 [2024-09-29 21:57:36.139709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.139777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.139790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.222 [2024-09-29 21:57:36.139799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:17.222 [2024-09-29 21:57:36.139808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.139831] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:17.222 [2024-09-29 21:57:36.141062] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:17.222 [2024-09-29 21:57:36.141122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.141134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.222 [2024-09-29 21:57:36.141145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.297 ms 00:26:17.222 [2024-09-29 21:57:36.141153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.141630] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:17.222 [2024-09-29 21:57:36.141675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.141685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:17.222 [2024-09-29 21:57:36.141695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:17.222 [2024-09-29 21:57:36.141704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.141757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.141767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:17.222 [2024-09-29 21:57:36.141775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:17.222 [2024-09-29 21:57:36.141785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.142058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:17.222 [2024-09-29 21:57:36.142070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.222 [2024-09-29 21:57:36.142079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:26:17.222 [2024-09-29 21:57:36.142088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.142157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.142167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.222 [2024-09-29 21:57:36.142178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:17.222 [2024-09-29 21:57:36.142186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.142209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.142234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:17.222 [2024-09-29 21:57:36.142243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:17.222 [2024-09-29 21:57:36.142252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.142274] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:17.222 [2024-09-29 21:57:36.146540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.146719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.222 [2024-09-29 21:57:36.146738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.271 ms 00:26:17.222 [2024-09-29 21:57:36.146746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.146783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.146792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:17.222 [2024-09-29 21:57:36.146806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:17.222 [2024-09-29 21:57:36.146813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.146874] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:17.222 [2024-09-29 21:57:36.146899] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:17.222 [2024-09-29 21:57:36.146935] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:17.222 [2024-09-29 21:57:36.146951] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:17.222 [2024-09-29 21:57:36.147057] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:17.222 [2024-09-29 21:57:36.147072] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:17.222 [2024-09-29 21:57:36.147083] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:17.222 [2024-09-29 21:57:36.147094] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:17.222 [2024-09-29 21:57:36.147104] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:17.222 [2024-09-29 21:57:36.147112] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:17.222 [2024-09-29 21:57:36.147120] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:17.222 [2024-09-29 21:57:36.147128] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:17.222 [2024-09-29 21:57:36.147136] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:17.222 [2024-09-29 21:57:36.147144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.147152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:17.222 [2024-09-29 21:57:36.147161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:26:17.222 [2024-09-29 21:57:36.147171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.147257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.222 [2024-09-29 21:57:36.147266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:17.222 [2024-09-29 21:57:36.147274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:17.222 [2024-09-29 21:57:36.147282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.222 [2024-09-29 21:57:36.147408] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:17.222 [2024-09-29 21:57:36.147421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:17.222 [2024-09-29 21:57:36.147431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.222 [2024-09-29 21:57:36.147439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.222 [2024-09-29 21:57:36.147451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:17.222 [2024-09-29 21:57:36.147458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:17.222 [2024-09-29 21:57:36.147465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:17.222 [2024-09-29 21:57:36.147473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:17.222 [2024-09-29 21:57:36.147482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:17.222 [2024-09-29 21:57:36.147490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.222 [2024-09-29 21:57:36.147498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:17.222 [2024-09-29 21:57:36.147506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:17.222 [2024-09-29 21:57:36.147513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.222 [2024-09-29 21:57:36.147520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:17.222 [2024-09-29 21:57:36.147528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:17.222 [2024-09-29 21:57:36.147546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.222 [2024-09-29 21:57:36.147553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:17.222 [2024-09-29 21:57:36.147559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:17.222 [2024-09-29 21:57:36.147565] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.223 [2024-09-29 21:57:36.147572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:17.223 [2024-09-29 21:57:36.147579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:17.223 [2024-09-29 21:57:36.147586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.223 [2024-09-29 21:57:36.147593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:17.223 [2024-09-29 21:57:36.147599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:17.223 [2024-09-29 21:57:36.147607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.223 [2024-09-29 21:57:36.147613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:17.223 [2024-09-29 21:57:36.147620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:17.223 [2024-09-29 21:57:36.147627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.223 [2024-09-29 21:57:36.147634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:17.223 [2024-09-29 21:57:36.147640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:17.223 [2024-09-29 21:57:36.147647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.223 [2024-09-29 21:57:36.147655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:17.223 [2024-09-29 21:57:36.147664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:17.223 [2024-09-29 21:57:36.147671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.223 [2024-09-29 21:57:36.147679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:17.223 [2024-09-29 21:57:36.147686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:17.223 [2024-09-29 21:57:36.147693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.223 [2024-09-29 21:57:36.147700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:17.223 [2024-09-29 21:57:36.147707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:17.223 [2024-09-29 21:57:36.147715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.223 [2024-09-29 21:57:36.147722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:17.223 [2024-09-29 21:57:36.147729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:17.223 [2024-09-29 21:57:36.147736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.223 [2024-09-29 21:57:36.147742] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:17.223 [2024-09-29 21:57:36.147752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:17.223 [2024-09-29 21:57:36.147759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.223 [2024-09-29 21:57:36.147768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.223 [2024-09-29 21:57:36.147776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:17.223 [2024-09-29 21:57:36.147783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:17.223 [2024-09-29 21:57:36.147789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:17.223 
[2024-09-29 21:57:36.147796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:17.223 [2024-09-29 21:57:36.147803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:17.223 [2024-09-29 21:57:36.147809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:17.223 [2024-09-29 21:57:36.147817] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:17.223 [2024-09-29 21:57:36.147828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.223 [2024-09-29 21:57:36.147837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:17.223 [2024-09-29 21:57:36.147844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:17.223 [2024-09-29 21:57:36.147851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:17.223 [2024-09-29 21:57:36.147859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:17.223 [2024-09-29 21:57:36.147867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:17.223 [2024-09-29 21:57:36.147875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:17.223 [2024-09-29 21:57:36.147883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:17.223 [2024-09-29 21:57:36.147890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:17.223 [2024-09-29 21:57:36.147897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:17.223 [2024-09-29 21:57:36.147905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:17.223 [2024-09-29 21:57:36.147913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:17.223 [2024-09-29 21:57:36.147921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:17.223 [2024-09-29 21:57:36.147929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:17.223 [2024-09-29 21:57:36.147936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:17.223 [2024-09-29 21:57:36.147943] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:17.223 [2024-09-29 21:57:36.147952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.223 [2024-09-29 21:57:36.147961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:17.223 [2024-09-29 21:57:36.147969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:17.223 [2024-09-29 21:57:36.147978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:17.223 [2024-09-29 21:57:36.147985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:17.223 [2024-09-29 21:57:36.147993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.223 [2024-09-29 21:57:36.148001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:17.223 [2024-09-29 21:57:36.148011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:26:17.223 [2024-09-29 21:57:36.148019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.223 [2024-09-29 21:57:36.184814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.223 [2024-09-29 21:57:36.185002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.223 [2024-09-29 21:57:36.185029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.752 ms 00:26:17.223 [2024-09-29 21:57:36.185038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.223 [2024-09-29 21:57:36.185131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.223 [2024-09-29 21:57:36.185140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:17.223 [2024-09-29 21:57:36.185154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:17.223 [2024-09-29 21:57:36.185162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.219721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.219898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.484 [2024-09-29 21:57:36.219917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.495 ms 00:26:17.484 [2024-09-29 21:57:36.219926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.219964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.219975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.484 [2024-09-29 21:57:36.219984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:17.484 [2024-09-29 21:57:36.219993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.220099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.220118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.484 [2024-09-29 21:57:36.220127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:17.484 [2024-09-29 21:57:36.220134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.220261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.220271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.484 [2024-09-29 21:57:36.220279] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:26:17.484 [2024-09-29 21:57:36.220288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.234921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.235089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.484 [2024-09-29 21:57:36.235107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.614 ms 00:26:17.484 [2024-09-29 21:57:36.235117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.235267] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:17.484 [2024-09-29 21:57:36.235281] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:17.484 [2024-09-29 21:57:36.235291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.235299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:17.484 [2024-09-29 21:57:36.235309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:17.484 [2024-09-29 21:57:36.235317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.247619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.247658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:17.484 [2024-09-29 21:57:36.247669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.285 ms 00:26:17.484 [2024-09-29 21:57:36.247682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.247811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.247821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:17.484 [2024-09-29 21:57:36.247830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:26:17.484 [2024-09-29 21:57:36.247839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.247889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.247899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:17.484 [2024-09-29 21:57:36.247907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:26:17.484 [2024-09-29 21:57:36.247915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.248533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.248549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:17.484 [2024-09-29 21:57:36.248558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:26:17.484 [2024-09-29 21:57:36.248566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.484 [2024-09-29 21:57:36.248584] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:17.484 [2024-09-29 21:57:36.248595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.484 [2024-09-29 21:57:36.248604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:26:17.484 [2024-09-29 21:57:36.248612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:17.484 [2024-09-29 21:57:36.248620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.485 [2024-09-29 21:57:36.261149] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:17.485 [2024-09-29 21:57:36.261311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.485 [2024-09-29 21:57:36.261327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:17.485 [2024-09-29 21:57:36.261338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.672 ms 00:26:17.485 [2024-09-29 21:57:36.261346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.485 [2024-09-29 21:57:36.263793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.485 [2024-09-29 21:57:36.263830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:17.485 [2024-09-29 21:57:36.263840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.424 ms 00:26:17.485 [2024-09-29 21:57:36.263848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.485 [2024-09-29 21:57:36.263939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.485 [2024-09-29 21:57:36.263955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:17.485 [2024-09-29 21:57:36.263964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:17.485 [2024-09-29 21:57:36.263973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.485 [2024-09-29 21:57:36.263998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.485 [2024-09-29 21:57:36.264008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:17.485 [2024-09-29 21:57:36.264016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:17.485 [2024-09-29 21:57:36.264024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.485 [2024-09-29 21:57:36.264055] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:17.485 [2024-09-29 21:57:36.264065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.485 [2024-09-29 21:57:36.264074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:17.485 [2024-09-29 21:57:36.264085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:17.485 [2024-09-29 21:57:36.264092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.485 [2024-09-29 21:57:36.291392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.485 [2024-09-29 21:57:36.291445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:17.485 [2024-09-29 21:57:36.291458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.270 ms 00:26:17.485 [2024-09-29 21:57:36.291467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.485 [2024-09-29 21:57:36.291555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.485 [2024-09-29 21:57:36.291571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:17.485 [2024-09-29 21:57:36.291580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.043 ms 00:26:17.485 [2024-09-29 21:57:36.291589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.485 [2024-09-29 21:57:36.292759] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.887 ms, result 0 00:27:12.762  Copying: 1024/1024 [MB] (average 18 MBps)[2024-09-29 21:58:31.399779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.762 [2024-09-29 21:58:31.399855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:12.762 [2024-09-29 21:58:31.399872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:12.762 [2024-09-29 21:58:31.399882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.762 [2024-09-29 21:58:31.401191] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:12.762 [2024-09-29 21:58:31.407112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.762 [2024-09-29 21:58:31.407171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:12.762 [2024-09-29 21:58:31.407185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.882 ms 00:27:12.762 [2024-09-29 21:58:31.407194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.762 [2024-09-29 21:58:31.417906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.762 [2024-09-29 21:58:31.418095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:12.762 [2024-09-29 21:58:31.418117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 8.286 ms
00:27:12.762 [2024-09-29 21:58:31.418128] [FTL][ftl0] status: 0
00:27:12.762 [2024-09-29 21:58:31.418165] [FTL][ftl0] Action: Fast persist NV cache metadata (duration 0.005 ms, status 0)
00:27:12.762 [2024-09-29 21:58:31.418277] [FTL][ftl0] Action: Set FTL SHM clean state (duration 0.023 ms, status 0)
00:27:12.762 [2024-09-29 21:58:31.418320] [FTL][ftl0] Bands validity:
00:27:12.762 [FTL][ftl0]   Band 1: 128768 / 261120 wr_cnt: 1 state: open
00:27:12.762 [FTL][ftl0]   Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (99 identical per-band entries condensed)
00:27:12.763 [2024-09-29 21:58:31.419208] [FTL][ftl0] device UUID: 6a175fc5-6fdf-49f1-b3d6-57d4ea920dff
00:27:12.763 [FTL][ftl0] total valid LBAs: 128768, total writes: 128800, user writes: 128768, WAF: 1.0002
00:27:12.763 [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:27:12.763 [2024-09-29 21:58:31.419299] [FTL][ftl0] Action: Dump statistics (duration 0.980 ms, status 0)
00:27:12.763 [2024-09-29 21:58:31.433063] [FTL][ftl0] Action: Deinitialize L2P (duration 13.720 ms, status 0)
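A quick sanity check on the statistics block above: write amplification (WAF) is total writes divided by user writes, and the 32-block difference is presumably the FTL's own metadata writes on top of the user data. In plain Python, with the numbers taken verbatim from the dump:

    # WAF = total writes / user writes, values from the ftl_dev_dump_stats block above
    total_writes = 128_800   # blocks ("total writes: 128800")
    user_writes = 128_768    # blocks ("user writes: 128768")
    print(f"WAF = {total_writes / user_writes:.4f}")   # -> WAF = 1.0002, matching the logged value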
00:27:12.763 [2024-09-29 21:58:31.433663] [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration 0.393 ms, status 0)
00:27:12.763 [2024-09-29 21:58:31.465248] [FTL][ftl0] Rollback: Initialize reloc (duration 0.000 ms, status 0)
00:27:12.763 [FTL][ftl0] Rollback: Initialize bands metadata (duration 0.000 ms, status 0)
00:27:12.763 [FTL][ftl0] Rollback: Initialize trim map (duration 0.000 ms, status 0)
00:27:12.763 [FTL][ftl0] Rollback: Initialize valid map (duration 0.000 ms, status 0)
00:27:12.763 [2024-09-29 21:58:31.550009] [FTL][ftl0] Rollback: Initialize NV cache (duration 0.000 ms, status 0)
00:27:12.763 [2024-09-29 21:58:31.620605] [FTL][ftl0] Rollback: Initialize metadata (duration 0.000 ms, status 0)
00:27:12.763 [FTL][ftl0] Rollback: Initialize core IO channel (duration 0.000 ms, status 0)
00:27:12.764 [FTL][ftl0] Rollback: Initialize bands (duration 0.000 ms, status 0)
00:27:12.764 [FTL][ftl0] Rollback: Initialize memory pools (duration 0.000 ms, status 0)
00:27:12.764 [FTL][ftl0] Rollback: Initialize superblock (duration 0.000 ms, status 0)
00:27:12.764 [FTL][ftl0] Rollback: Open cache bdev (duration 0.000 ms, status 0)
00:27:12.764 [FTL][ftl0] Rollback: Open base bdev (duration 0.000 ms, status 0)
00:27:12.764 [2024-09-29 21:58:31.621338] [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 224.361 ms, result 0
00:27:14.233 21:58:33 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:27:14.233 [2024-09-29 21:58:33.182762] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
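The restore step re-reads the device through spdk_dd. --skip and --count are given in bdev blocks; assuming the FTL bdev's 4096-byte block size (an assumption, but consistent with the 1024 MB total reported by the copy progress further down), the command skips 512 MiB and copies 1 GiB:

    BLOCK_SIZE = 4096              # bytes per FTL bdev block (assumed; matches the MB totals in this log)
    skip, count = 131072, 262144   # --skip and --count from the command above
    print(skip * BLOCK_SIZE // 2**20, "MiB skipped")    # 512 MiB
    print(count * BLOCK_SIZE // 2**20, "MiB copied")    # 1024 MiB ("Copying: .../1024 [MB]")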
00:27:14.233 [2024-09-29 21:58:33.183154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80730 ]
00:27:14.518 [2024-09-29 21:58:33.335029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:14.779 [2024-09-29 21:58:33.559886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:27:15.040 [2024-09-29 21:58:33.846645] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (logged twice)
00:27:15.040 [2024-09-29 21:58:34.007680] [FTL][ftl0] Action: Check configuration (duration 0.005 ms, status 0)
00:27:15.040 [FTL][ftl0] Action: Open base bdev (duration 0.035 ms, status 0)
00:27:15.040 [2024-09-29 21:58:34.007871] [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:15.040 [2024-09-29 21:58:34.008593] [FTL][ftl0] Using bdev as NV Cache device
00:27:15.040 [FTL][ftl0] Action: Open cache bdev (duration 0.746 ms, status 0)
00:27:15.040 [2024-09-29 21:58:34.008918] [FTL][ftl0] SHM: clean 1, shm_clean 1
00:27:15.040 [FTL][ftl0] Action: Load super block (duration 0.029 ms, status 0)
00:27:15.040 [FTL][ftl0] Action: Validate super block (duration 0.035 ms, status 0)
00:27:15.040 [FTL][ftl0] Action: Initialize memory pools (duration 0.281 ms, status 0)
00:27:15.040 [FTL][ftl0] Action: Initialize bands (duration 0.069 ms, status 0)
00:27:15.040 [FTL][ftl0] Action: Register IO device (duration 0.007 ms, status 0)
00:27:15.040 [2024-09-29 21:58:34.009727] [FTL][ftl0] FTL IO channel created on app_thread
00:27:15.040 [2024-09-29 21:58:34.014037] [FTL][ftl0] Action: Initialize core IO channel (duration 4.315 ms, status 0)
00:27:15.040 [FTL][ftl0] Action: Decorate bands (duration 0.016 ms, status 0)
00:27:15.040 [2024-09-29 21:58:34.014234] [FTL][ftl0] FTL layout setup mode 0
00:27:15.040 [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
00:27:15.040 [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
00:27:15.040 [2024-09-29 21:58:34.014478] [FTL][ftl0] Base device capacity: 103424.00 MiB
00:27:15.040 [2024-09-29 21:58:34.014488] [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:27:15.040 [FTL][ftl0] L2P entries: 20971520
00:27:15.040 [FTL][ftl0] L2P address size: 4
00:27:15.040 [FTL][ftl0] P2L checkpoint pages: 2048
00:27:15.040 [FTL][ftl0] NV cache chunk count 5
00:27:15.040 [FTL][ftl0] Action: Initialize layout (duration 0.296 ms, status 0)
00:27:15.040 [FTL][ftl0] Action: Verify layout (duration 0.075 ms, status 0)
00:27:15.040 [2024-09-29 21:58:34.014778] [FTL][ftl0] NV cache layout (Region / offset MiB / blocks MiB):
00:27:15.040   sb                  0.00       0.12
00:27:15.040   l2p                 0.12      80.00
00:27:15.040   band_md            80.12       0.50
00:27:15.040   band_md_mirror     80.62       0.50
00:27:15.040   nvc_md            113.88       0.12
00:27:15.040   nvc_md_mirror     114.00       0.12
00:27:15.040   p2l0               81.12       8.00
00:27:15.040   p2l1               89.12       8.00
00:27:15.040   p2l2               97.12       8.00
00:27:15.041   p2l3              105.12       8.00
00:27:15.041   trim_md           113.12       0.25
00:27:15.041   trim_md_mirror    113.38       0.25
00:27:15.041   trim_log          113.62       0.12
00:27:15.041   trim_log_mirror   113.75       0.12
00:27:15.041 [FTL][ftl0] Base device layout (Region / offset MiB / blocks MiB):
00:27:15.041   sb_mirror           0.00       0.12
00:27:15.041   vmap           102400.25       3.38
00:27:15.041   data_btm            0.25  102400.00
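The 80 MiB l2p region in the table above follows directly from the parameters logged earlier, L2P entry count times address size; a one-line check:

    l2p_entries = 20_971_520          # "L2P entries: 20971520"
    entry_size = 4                    # bytes, "L2P address size: 4"
    print(l2p_entries * entry_size / 2**20, "MiB")   # 80.0, matching the l2p region size above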
00:27:15.041 [2024-09-29 21:58:34.015173] [FTL][ftl0] SB metadata layout - nvc (Region type / ver / blk_offs / blk_sz):
00:27:15.041   0x0         5   0x0        0x20
00:27:15.041   0x2         0   0x20       0x5000
00:27:15.041   0x3         2   0x5020     0x80
00:27:15.041   0x4         2   0x50a0     0x80
00:27:15.041   0xa         2   0x5120     0x800
00:27:15.041   0xb         2   0x5920     0x800
00:27:15.041   0xc         2   0x6120     0x800
00:27:15.041   0xd         2   0x6920     0x800
00:27:15.041   0xe         0   0x7120     0x40
00:27:15.041   0xf         0   0x7160     0x40
00:27:15.041   0x10        1   0x71a0     0x20
00:27:15.041   0x11        1   0x71c0     0x20
00:27:15.041   0x6         2   0x71e0     0x20
00:27:15.041   0x7         2   0x7200     0x20
00:27:15.041   0xfffffffe  0   0x7220     0x13c0e0
00:27:15.041 [FTL][ftl0] SB metadata layout - base dev (Region type / ver / blk_offs / blk_sz):
00:27:15.041   0x1         5   0x0        0x20
00:27:15.041   0xfffffffe  0   0x20       0x20
00:27:15.041   0x9         0   0x40       0x1900000
00:27:15.041   0x5         0   0x1900040  0x360
00:27:15.041   0xfffffffe  0   0x19003a0  0x3fc60
00:27:15.041 [FTL][ftl0] Action: Layout upgrade (duration 0.638 ms, status 0)
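blk_offs and blk_sz in these superblock tables are counted in FTL blocks; multiplying by an assumed 4 KiB block size reproduces the MiB figures of the layout dump above. For example, region type 0x2 lines up with the 80 MiB l2p region:

    BLOCK_SIZE = 4096                  # bytes per FTL block (assumed)
    blk_offs, blk_sz = 0x20, 0x5000    # region type 0x2 from the nvc table above
    print(blk_offs * BLOCK_SIZE / 2**20, "MiB offset")  # 0.125, logged as "offset: 0.12 MiB"
    print(blk_sz * BLOCK_SIZE / 2**20, "MiB size")      # 80.0, matching "blocks: 80.00 MiB" for l2p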
00:27:15.302 [2024-09-29 21:58:34.052360] [FTL][ftl0] Action: Initialize metadata (duration 36.932 ms, status 0)
00:27:15.302 [2024-09-29 21:58:34.053459] [FTL][ftl0] Action: Initialize band addresses (duration 0.095 ms, status 0)
00:27:15.302 [2024-09-29 21:58:34.088639] [FTL][ftl0] Action: Initialize NV cache (duration 34.749 ms, status 0)
00:27:15.302 [FTL][ftl0] Action: Initialize valid map (duration 0.005 ms, status 0)
00:27:15.303 [FTL][ftl0] Action: Initialize trim map (duration 0.077 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.089932] [FTL][ftl0] Action: Initialize bands metadata (duration 0.163 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.104737] [FTL][ftl0] Action: Initialize reloc (duration 14.482 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.105306] [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:27:15.303 [FTL][ftl0] FTL NV Cache: state loaded successfully
00:27:15.303 [FTL][ftl0] Action: Restore NV cache metadata (duration 0.182 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.117902] [FTL][ftl0] Action: Restore valid map metadata (duration 12.380 ms, status 0)
00:27:15.303 [FTL][ftl0] Action: Restore band info metadata (duration 0.098 ms, status 0)
00:27:15.303 [FTL][ftl0] Action: Restore trim metadata (duration 0.002 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.118843] [FTL][ftl0] Action: Initialize P2L checkpointing (duration 0.581 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.118908] [FTL][ftl0] SHM: skipping p2l ckpt restore
00:27:15.303 [FTL][ftl0] Action: Restore P2L checkpoints (duration 0.012 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.131459] l2p maximum resident size is: 9 (of 10) MiB
00:27:15.303 [FTL][ftl0] Action: Initialize L2P (duration 12.644 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.133954] [FTL][ftl0] Action: Restore L2P (duration 2.276 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.134091] [FTL][ftl0] SHM: band open P2L map df_id 0x2400000
00:27:15.303 [FTL][ftl0] Action: Finalize band initialization (duration 0.523 ms, status 0)
00:27:15.303 [FTL][ftl0] Action: Start core poller (duration 0.005 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.134713] [FTL][ftl0] Self test skipped
00:27:15.303 [FTL][ftl0] Action: Self test on startup (duration 0.012 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.161495] [FTL][ftl0] Action: Set FTL dirty state (duration 26.728 ms, status 0)
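Of the 154.682 ms 'FTL startup' total reported just below, three steps dominate: Initialize metadata (36.932 ms), Initialize NV cache (34.749 ms) and Set FTL dirty state (26.728 ms). When triaging runs like this one it can help to pull step durations out programmatically; an illustrative sketch whose regex targets the condensed one-line step format used above:

    import re

    # Matches lines like "[FTL][ftl0] Action: Initialize metadata (duration 36.932 ms, status 0)"
    STEP = re.compile(r"(?:Action|Rollback): (?P<name>.+?) \(duration (?P<ms>[\d.]+) ms")

    def slowest_steps(log_text, top=3):
        """Return the top-N slowest FTL management steps found in the log text."""
        steps = [(float(m["ms"]), m["name"]) for m in STEP.finditer(log_text)]
        return sorted(steps, reverse=True)[:top]

    # Over the startup sequence above this yields Initialize metadata,
    # Initialize NV cache, then Set FTL dirty state.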
00:27:15.303 [FTL][ftl0] Action: Finalize initialization (duration 0.042 ms, status 0)
00:27:15.303 [2024-09-29 21:58:34.162886] [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.682 ms, result 0
00:28:23.328  Copying: 24/1024 [MB] (24 MBps) [66 intermediate progress samples, 10-24 MBps each, condensed] Copying: 1024/1024 [MB] (average 15 MBps)
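The reported average is consistent with the timestamps: from the end of 'FTL startup' at 21:58:34 to the first deinit record at 21:59:42 is about 68 seconds for 1024 MB:

    copied_mb = 1024
    elapsed_s = (21*3600 + 59*60 + 42) - (21*3600 + 58*60 + 34)   # 21:58:34 -> 21:59:42, from the log
    print(round(copied_mb / elapsed_s, 1), "MB/s")                # 15.1, logged as "(average 15 MBps)"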
00:28:23.328 [2024-09-29 21:59:42.151316] [FTL][ftl0] Action: Deinit core IO channel (duration 0.005 ms, status 0)
00:28:23.328 [2024-09-29 21:59:42.151483] [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:23.328 [2024-09-29 21:59:42.154950] [FTL][ftl0] Action: Unregister IO device (duration 3.443 ms, status 0)
00:28:23.328 [2024-09-29 21:59:42.155288] [FTL][ftl0] Action: Stop core poller (duration 0.233 ms, status 0)
00:28:23.328 [FTL][ftl0] Action: Fast persist NV cache metadata (duration 0.007 ms, status 0)
00:28:23.328 [FTL][ftl0] Action: Set FTL SHM clean state (duration 0.032 ms, status 0)
00:28:23.328 [2024-09-29 21:59:42.155537] [FTL][ftl0] Bands validity:
00:28:23.328 [FTL][ftl0]   Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:28:23.329 [FTL][ftl0]   Bands 2-87: 0 / 261120 wr_cnt: 0 state: free (86 identical per-band entries condensed)
00:28:23.329 [2024-09-29 21:59:42.156328] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:23.329 [2024-09-29 21:59:42.156337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:23.329 [2024-09-29 21:59:42.156344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:23.329 [2024-09-29 21:59:42.156358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:23.329 [2024-09-29 21:59:42.156376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:23.329 [2024-09-29 21:59:42.156400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:23.329 [2024-09-29 21:59:42.156411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:23.329 [2024-09-29 21:59:42.156421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:23.329 [2024-09-29 21:59:42.156430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:23.330 [2024-09-29 21:59:42.156440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:23.330 [2024-09-29 21:59:42.156453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:23.330 [2024-09-29 21:59:42.156464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:23.330 [2024-09-29 21:59:42.156479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:23.330 [2024-09-29 21:59:42.156498] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:23.330 [2024-09-29 21:59:42.156604] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6a175fc5-6fdf-49f1-b3d6-57d4ea920dff 00:28:23.330 [2024-09-29 21:59:42.156617] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:28:23.330 [2024-09-29 21:59:42.156627] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2336 00:28:23.330 [2024-09-29 21:59:42.156634] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2304 00:28:23.330 [2024-09-29 21:59:42.156644] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0139 00:28:23.330 [2024-09-29 21:59:42.156653] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:23.330 [2024-09-29 21:59:42.156664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:23.330 [2024-09-29 21:59:42.156673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:23.330 [2024-09-29 21:59:42.156680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:23.330 [2024-09-29 21:59:42.156687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:23.330 [2024-09-29 21:59:42.156695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.330 [2024-09-29 21:59:42.156706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:23.330 [2024-09-29 21:59:42.156715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms 00:28:23.330 [2024-09-29 21:59:42.156724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.330 [2024-09-29 
21:59:42.172905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.330 [2024-09-29 21:59:42.172957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:23.330 [2024-09-29 21:59:42.172972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.157 ms 00:28:23.330 [2024-09-29 21:59:42.172980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.330 [2024-09-29 21:59:42.173439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.330 [2024-09-29 21:59:42.173453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:23.330 [2024-09-29 21:59:42.173472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:28:23.330 [2024-09-29 21:59:42.173482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.330 [2024-09-29 21:59:42.207611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.330 [2024-09-29 21:59:42.207865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:23.330 [2024-09-29 21:59:42.207887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.330 [2024-09-29 21:59:42.207899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.330 [2024-09-29 21:59:42.207985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.330 [2024-09-29 21:59:42.207996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:23.330 [2024-09-29 21:59:42.208012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.330 [2024-09-29 21:59:42.208022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.330 [2024-09-29 21:59:42.208086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.330 [2024-09-29 21:59:42.208100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:23.330 [2024-09-29 21:59:42.208109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.330 [2024-09-29 21:59:42.208118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.330 [2024-09-29 21:59:42.208138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.330 [2024-09-29 21:59:42.208147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:23.330 [2024-09-29 21:59:42.208155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.330 [2024-09-29 21:59:42.208167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.624 [2024-09-29 21:59:42.299707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.624 [2024-09-29 21:59:42.299785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:23.624 [2024-09-29 21:59:42.299800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.624 [2024-09-29 21:59:42.299809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.624 [2024-09-29 21:59:42.374087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.624 [2024-09-29 21:59:42.374421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:23.624 [2024-09-29 21:59:42.374453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.624 [2024-09-29 21:59:42.374462] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.624 [2024-09-29 21:59:42.374574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.624 [2024-09-29 21:59:42.374586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:23.624 [2024-09-29 21:59:42.374598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.624 [2024-09-29 21:59:42.374607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.624 [2024-09-29 21:59:42.374653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.624 [2024-09-29 21:59:42.374664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:23.624 [2024-09-29 21:59:42.374674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.624 [2024-09-29 21:59:42.374682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.624 [2024-09-29 21:59:42.374788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.624 [2024-09-29 21:59:42.374802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:23.624 [2024-09-29 21:59:42.374813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.624 [2024-09-29 21:59:42.374821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.624 [2024-09-29 21:59:42.374850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.624 [2024-09-29 21:59:42.374860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:23.624 [2024-09-29 21:59:42.374869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.624 [2024-09-29 21:59:42.374881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.624 [2024-09-29 21:59:42.374937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.624 [2024-09-29 21:59:42.374950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:23.624 [2024-09-29 21:59:42.374959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.624 [2024-09-29 21:59:42.374968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.624 [2024-09-29 21:59:42.375024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:23.624 [2024-09-29 21:59:42.375037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:23.624 [2024-09-29 21:59:42.375047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:23.624 [2024-09-29 21:59:42.375058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.624 [2024-09-29 21:59:42.375220] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 223.866 ms, result 0 00:28:24.579 00:28:24.579 00:28:24.579 21:59:43 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:27.128 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 78638 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78638 ']' 00:28:27.128 Process with pid 78638 is not found 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78638 00:28:27.128 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78638) - No such process 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 78638 is not found' 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:28:27.128 Remove shared memory files 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_band_md /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_l2p_l1 /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_l2p_l2 /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_l2p_l2_ctx /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_nvc_md /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_p2l_pool /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_sb /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_sb_shm /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_trim_bitmap /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_trim_log /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_trim_md /dev/hugepages/ftl_6a175fc5-6fdf-49f1-b3d6-57d4ea920dff_vmap 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:28:27.128 ************************************ 00:28:27.128 END TEST ftl_restore_fast 00:28:27.128 ************************************ 00:28:27.128 00:28:27.128 real 4m37.881s 00:28:27.128 user 4m26.245s 00:28:27.128 sys 0m11.362s 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:27.128 21:59:45 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:27.128 Process with pid 72719 is not found 00:28:27.128 21:59:45 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:27.128 21:59:45 ftl -- ftl/ftl.sh@14 -- # killprocess 72719 00:28:27.128 21:59:45 ftl -- common/autotest_common.sh@950 -- # '[' -z 72719 ']' 00:28:27.128 21:59:45 ftl -- common/autotest_common.sh@954 -- # kill -0 72719 00:28:27.128 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72719) - No such process 00:28:27.128 21:59:45 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72719 is not found' 00:28:27.128 21:59:45 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:27.128 21:59:45 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81490 00:28:27.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
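The waitforlisten step below blocks until the freshly launched spdk_tgt answers on its RPC socket, using the same kill -0 liveness probe that killprocess used just above (kill -0 sends no signal; it only checks whether the pid exists). A minimal sketch of the idiom, assuming the stock rpc_get_methods RPC; this is an illustration, not the verbatim autotest_common.sh source:

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 100; i != 0; i--)); do       # 100 matches max_retries in the trace above
        # stop early if the target process died before it ever started listening
        kill -0 "$pid" 2>/dev/null || return 1
        # the target is ready once any RPC succeeds over the UNIX domain socket
        scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
        sleep 0.5
      done
      return 1
    }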
00:28:27.128 21:59:45 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81490 00:28:27.128 21:59:45 ftl -- common/autotest_common.sh@831 -- # '[' -z 81490 ']' 00:28:27.128 21:59:45 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.128 21:59:45 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:27.128 21:59:45 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.128 21:59:45 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:27.128 21:59:45 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:27.128 21:59:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:27.128 [2024-09-29 21:59:45.724620] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:28:27.128 [2024-09-29 21:59:45.724950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81490 ] 00:28:27.128 [2024-09-29 21:59:45.870670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.128 [2024-09-29 21:59:46.084696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.071 21:59:46 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:28.071 21:59:46 ftl -- common/autotest_common.sh@864 -- # return 0 00:28:28.071 21:59:46 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:28.071 nvme0n1 00:28:28.071 21:59:47 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:28.071 21:59:47 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:28.071 21:59:47 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:28.332 21:59:47 ftl -- ftl/common.sh@28 -- # stores=e42eb667-b3f3-4a89-a50a-6c73e49bb37c 00:28:28.332 21:59:47 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:28.332 21:59:47 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e42eb667-b3f3-4a89-a50a-6c73e49bb37c 00:28:28.593 21:59:47 ftl -- ftl/ftl.sh@23 -- # killprocess 81490 00:28:28.593 21:59:47 ftl -- common/autotest_common.sh@950 -- # '[' -z 81490 ']' 00:28:28.593 21:59:47 ftl -- common/autotest_common.sh@954 -- # kill -0 81490 00:28:28.593 21:59:47 ftl -- common/autotest_common.sh@955 -- # uname 00:28:28.593 21:59:47 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:28.593 21:59:47 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81490 00:28:28.593 killing process with pid 81490 00:28:28.593 21:59:47 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:28.593 21:59:47 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:28.593 21:59:47 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81490' 00:28:28.593 21:59:47 ftl -- common/autotest_common.sh@969 -- # kill 81490 00:28:28.593 21:59:47 ftl -- common/autotest_common.sh@974 -- # wait 81490 00:28:30.500 21:59:48 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:30.500 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:30.500 Waiting for block devices as requested 00:28:30.500 0000:00:11.0 (1b36 0010): uio_pci_generic 
-> nvme 00:28:30.500 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:30.500 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:30.500 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:35.788 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:35.788 21:59:54 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:35.788 21:59:54 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:35.788 Remove shared memory files 00:28:35.788 21:59:54 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:35.788 21:59:54 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:35.788 21:59:54 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:35.788 21:59:54 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:35.788 21:59:54 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:35.788 00:28:35.788 real 12m55.269s 00:28:35.788 user 14m56.708s 00:28:35.788 sys 1m18.418s 00:28:35.788 21:59:54 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:35.788 ************************************ 00:28:35.788 END TEST ftl 00:28:35.788 ************************************ 00:28:35.788 21:59:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:35.788 21:59:54 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:35.788 21:59:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:35.788 21:59:54 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:35.788 21:59:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:35.788 21:59:54 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:28:35.788 21:59:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:35.788 21:59:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:35.788 21:59:54 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:28:35.788 21:59:54 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:28:35.788 21:59:54 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:28:35.788 21:59:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:35.788 21:59:54 -- common/autotest_common.sh@10 -- # set +x 00:28:35.788 21:59:54 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:28:35.788 21:59:54 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:35.788 21:59:54 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:35.788 21:59:54 -- common/autotest_common.sh@10 -- # set +x 00:28:37.170 INFO: APP EXITING 00:28:37.170 INFO: killing all VMs 00:28:37.170 INFO: killing vhost app 00:28:37.170 INFO: EXIT DONE 00:28:37.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:37.686 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:37.686 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:37.944 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:37.944 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:38.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:38.464 Cleaning 00:28:38.464 Removing: /var/run/dpdk/spdk0/config 00:28:38.464 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:38.464 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:38.464 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:38.464 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:38.464 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:38.464 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:38.464 Removing: /var/run/dpdk/spdk0 00:28:38.464 Removing: /var/run/dpdk/spdk_pid57296 00:28:38.464 Removing: 
/var/run/dpdk/spdk_pid57492 00:28:38.464 Removing: /var/run/dpdk/spdk_pid57710 00:28:38.464 Removing: /var/run/dpdk/spdk_pid57803 00:28:38.464 Removing: /var/run/dpdk/spdk_pid57843 00:28:38.464 Removing: /var/run/dpdk/spdk_pid57965 00:28:38.464 Removing: /var/run/dpdk/spdk_pid57983 00:28:38.464 Removing: /var/run/dpdk/spdk_pid58177 00:28:38.464 Removing: /var/run/dpdk/spdk_pid58263 00:28:38.464 Removing: /var/run/dpdk/spdk_pid58354 00:28:38.724 Removing: /var/run/dpdk/spdk_pid58459 00:28:38.724 Removing: /var/run/dpdk/spdk_pid58551 00:28:38.724 Removing: /var/run/dpdk/spdk_pid58596 00:28:38.724 Removing: /var/run/dpdk/spdk_pid58627 00:28:38.724 Removing: /var/run/dpdk/spdk_pid58703 00:28:38.724 Removing: /var/run/dpdk/spdk_pid58792 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59223 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59287 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59339 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59355 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59457 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59473 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59575 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59591 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59650 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59668 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59721 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59739 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59899 00:28:38.724 Removing: /var/run/dpdk/spdk_pid59937 00:28:38.724 Removing: /var/run/dpdk/spdk_pid60024 00:28:38.724 Removing: /var/run/dpdk/spdk_pid60197 00:28:38.724 Removing: /var/run/dpdk/spdk_pid60280 00:28:38.724 Removing: /var/run/dpdk/spdk_pid60317 00:28:38.724 Removing: /var/run/dpdk/spdk_pid60739 00:28:38.724 Removing: /var/run/dpdk/spdk_pid60837 00:28:38.724 Removing: /var/run/dpdk/spdk_pid60959 00:28:38.724 Removing: /var/run/dpdk/spdk_pid61012 00:28:38.724 Removing: /var/run/dpdk/spdk_pid61043 00:28:38.724 Removing: /var/run/dpdk/spdk_pid61127 00:28:38.724 Removing: /var/run/dpdk/spdk_pid61745 00:28:38.724 Removing: /var/run/dpdk/spdk_pid61786 00:28:38.724 Removing: /var/run/dpdk/spdk_pid62258 00:28:38.724 Removing: /var/run/dpdk/spdk_pid62352 00:28:38.724 Removing: /var/run/dpdk/spdk_pid62472 00:28:38.724 Removing: /var/run/dpdk/spdk_pid62528 00:28:38.724 Removing: /var/run/dpdk/spdk_pid62559 00:28:38.724 Removing: /var/run/dpdk/spdk_pid62585 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64426 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64563 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64567 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64579 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64626 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64630 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64642 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64687 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64691 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64703 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64748 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64752 00:28:38.724 Removing: /var/run/dpdk/spdk_pid64764 00:28:38.724 Removing: /var/run/dpdk/spdk_pid66125 00:28:38.724 Removing: /var/run/dpdk/spdk_pid66222 00:28:38.724 Removing: /var/run/dpdk/spdk_pid67623 00:28:38.724 Removing: /var/run/dpdk/spdk_pid68992 00:28:38.724 Removing: /var/run/dpdk/spdk_pid69090 00:28:38.724 Removing: /var/run/dpdk/spdk_pid69173 00:28:38.724 Removing: /var/run/dpdk/spdk_pid69251 00:28:38.724 Removing: /var/run/dpdk/spdk_pid69356 00:28:38.724 Removing: /var/run/dpdk/spdk_pid69431 00:28:38.724 Removing: /var/run/dpdk/spdk_pid69573 
00:28:38.724 Removing: /var/run/dpdk/spdk_pid69937 00:28:38.724 Removing: /var/run/dpdk/spdk_pid69972 00:28:38.724 Removing: /var/run/dpdk/spdk_pid70431 00:28:38.724 Removing: /var/run/dpdk/spdk_pid70612 00:28:38.724 Removing: /var/run/dpdk/spdk_pid70711 00:28:38.724 Removing: /var/run/dpdk/spdk_pid70833 00:28:38.724 Removing: /var/run/dpdk/spdk_pid70892 00:28:38.724 Removing: /var/run/dpdk/spdk_pid70923 00:28:38.724 Removing: /var/run/dpdk/spdk_pid71233 00:28:38.724 Removing: /var/run/dpdk/spdk_pid71294 00:28:38.724 Removing: /var/run/dpdk/spdk_pid71371 00:28:38.724 Removing: /var/run/dpdk/spdk_pid71764 00:28:38.724 Removing: /var/run/dpdk/spdk_pid71909 00:28:38.724 Removing: /var/run/dpdk/spdk_pid72719 00:28:38.724 Removing: /var/run/dpdk/spdk_pid72850 00:28:38.724 Removing: /var/run/dpdk/spdk_pid73017 00:28:38.724 Removing: /var/run/dpdk/spdk_pid73125 00:28:38.724 Removing: /var/run/dpdk/spdk_pid73434 00:28:38.724 Removing: /var/run/dpdk/spdk_pid73699 00:28:38.724 Removing: /var/run/dpdk/spdk_pid74039 00:28:38.724 Removing: /var/run/dpdk/spdk_pid74223 00:28:38.724 Removing: /var/run/dpdk/spdk_pid74320 00:28:38.724 Removing: /var/run/dpdk/spdk_pid74367 00:28:38.724 Removing: /var/run/dpdk/spdk_pid74466 00:28:38.724 Removing: /var/run/dpdk/spdk_pid74491 00:28:38.724 Removing: /var/run/dpdk/spdk_pid74544 00:28:38.724 Removing: /var/run/dpdk/spdk_pid74712 00:28:38.724 Removing: /var/run/dpdk/spdk_pid74930 00:28:38.724 Removing: /var/run/dpdk/spdk_pid75188 00:28:38.724 Removing: /var/run/dpdk/spdk_pid75470 00:28:38.724 Removing: /var/run/dpdk/spdk_pid75728 00:28:38.724 Removing: /var/run/dpdk/spdk_pid76077 00:28:38.724 Removing: /var/run/dpdk/spdk_pid76202 00:28:38.724 Removing: /var/run/dpdk/spdk_pid76289 00:28:38.724 Removing: /var/run/dpdk/spdk_pid76663 00:28:38.724 Removing: /var/run/dpdk/spdk_pid76721 00:28:38.724 Removing: /var/run/dpdk/spdk_pid77018 00:28:38.724 Removing: /var/run/dpdk/spdk_pid77301 00:28:38.724 Removing: /var/run/dpdk/spdk_pid77658 00:28:38.724 Removing: /var/run/dpdk/spdk_pid77769 00:28:38.724 Removing: /var/run/dpdk/spdk_pid77805 00:28:38.985 Removing: /var/run/dpdk/spdk_pid77862 00:28:38.985 Removing: /var/run/dpdk/spdk_pid77919 00:28:38.985 Removing: /var/run/dpdk/spdk_pid77972 00:28:38.985 Removing: /var/run/dpdk/spdk_pid78162 00:28:38.985 Removing: /var/run/dpdk/spdk_pid78255 00:28:38.985 Removing: /var/run/dpdk/spdk_pid78324 00:28:38.985 Removing: /var/run/dpdk/spdk_pid78391 00:28:38.985 Removing: /var/run/dpdk/spdk_pid78424 00:28:38.985 Removing: /var/run/dpdk/spdk_pid78491 00:28:38.985 Removing: /var/run/dpdk/spdk_pid78638 00:28:38.985 Removing: /var/run/dpdk/spdk_pid78867 00:28:38.985 Removing: /var/run/dpdk/spdk_pid79463 00:28:38.985 Removing: /var/run/dpdk/spdk_pid80142 00:28:38.985 Removing: /var/run/dpdk/spdk_pid80730 00:28:38.985 Removing: /var/run/dpdk/spdk_pid81490 00:28:38.985 Clean 00:28:38.985 21:59:57 -- common/autotest_common.sh@1451 -- # return 0 00:28:38.985 21:59:57 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:28:38.985 21:59:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:38.985 21:59:57 -- common/autotest_common.sh@10 -- # set +x 00:28:38.985 21:59:57 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:28:38.985 21:59:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:38.985 21:59:57 -- common/autotest_common.sh@10 -- # set +x 00:28:38.985 21:59:57 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:38.985 21:59:57 -- spdk/autotest.sh@390 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:38.985 21:59:57 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:38.985 21:59:57 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:28:38.985 21:59:57 -- spdk/autotest.sh@394 -- # hostname 00:28:38.985 21:59:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:39.245 geninfo: WARNING: invalid characters removed from testname! 00:29:05.837 22:00:23 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:07.792 22:00:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:10.337 22:00:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:12.892 22:00:31 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:15.440 22:00:34 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:18.750 22:00:37 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:21.298 22:00:39 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:21.298 22:00:39 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:29:21.298 22:00:39 -- common/autotest_common.sh@1681 -- $ lcov --version 00:29:21.298 22:00:39 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:29:21.298 22:00:39 -- 
common/autotest_common.sh@1681 -- $ lt 1.15 2 00:29:21.298 22:00:39 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:29:21.298 22:00:39 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:29:21.298 22:00:39 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:29:21.298 22:00:39 -- scripts/common.sh@336 -- $ IFS=.-: 00:29:21.298 22:00:39 -- scripts/common.sh@336 -- $ read -ra ver1 00:29:21.298 22:00:39 -- scripts/common.sh@337 -- $ IFS=.-: 00:29:21.298 22:00:39 -- scripts/common.sh@337 -- $ read -ra ver2 00:29:21.298 22:00:39 -- scripts/common.sh@338 -- $ local 'op=<' 00:29:21.298 22:00:39 -- scripts/common.sh@340 -- $ ver1_l=2 00:29:21.298 22:00:39 -- scripts/common.sh@341 -- $ ver2_l=1 00:29:21.298 22:00:39 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:29:21.298 22:00:39 -- scripts/common.sh@344 -- $ case "$op" in 00:29:21.298 22:00:39 -- scripts/common.sh@345 -- $ : 1 00:29:21.299 22:00:39 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:29:21.299 22:00:39 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:21.299 22:00:39 -- scripts/common.sh@365 -- $ decimal 1 00:29:21.299 22:00:39 -- scripts/common.sh@353 -- $ local d=1 00:29:21.299 22:00:39 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:29:21.299 22:00:39 -- scripts/common.sh@355 -- $ echo 1 00:29:21.299 22:00:39 -- scripts/common.sh@365 -- $ ver1[v]=1 00:29:21.299 22:00:39 -- scripts/common.sh@366 -- $ decimal 2 00:29:21.299 22:00:39 -- scripts/common.sh@353 -- $ local d=2 00:29:21.299 22:00:39 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:29:21.299 22:00:39 -- scripts/common.sh@355 -- $ echo 2 00:29:21.299 22:00:39 -- scripts/common.sh@366 -- $ ver2[v]=2 00:29:21.299 22:00:39 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:29:21.299 22:00:39 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:29:21.299 22:00:39 -- scripts/common.sh@368 -- $ return 0 00:29:21.299 22:00:39 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.299 22:00:39 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:29:21.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.299 --rc genhtml_branch_coverage=1 00:29:21.299 --rc genhtml_function_coverage=1 00:29:21.299 --rc genhtml_legend=1 00:29:21.299 --rc geninfo_all_blocks=1 00:29:21.299 --rc geninfo_unexecuted_blocks=1 00:29:21.299 00:29:21.299 ' 00:29:21.299 22:00:39 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:29:21.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.299 --rc genhtml_branch_coverage=1 00:29:21.299 --rc genhtml_function_coverage=1 00:29:21.299 --rc genhtml_legend=1 00:29:21.299 --rc geninfo_all_blocks=1 00:29:21.299 --rc geninfo_unexecuted_blocks=1 00:29:21.299 00:29:21.299 ' 00:29:21.299 22:00:39 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:29:21.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.299 --rc genhtml_branch_coverage=1 00:29:21.299 --rc genhtml_function_coverage=1 00:29:21.299 --rc genhtml_legend=1 00:29:21.299 --rc geninfo_all_blocks=1 00:29:21.299 --rc geninfo_unexecuted_blocks=1 00:29:21.299 00:29:21.299 ' 00:29:21.299 22:00:39 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:29:21.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.299 --rc genhtml_branch_coverage=1 00:29:21.299 --rc genhtml_function_coverage=1 00:29:21.299 --rc genhtml_legend=1 00:29:21.299 --rc geninfo_all_blocks=1 00:29:21.299 --rc geninfo_unexecuted_blocks=1 
00:29:21.299 00:29:21.299 ' 00:29:21.299 22:00:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:21.299 22:00:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:29:21.299 22:00:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:21.299 22:00:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.299 22:00:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.299 22:00:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.299 22:00:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.299 22:00:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.299 22:00:39 -- paths/export.sh@5 -- $ export PATH 00:29:21.299 22:00:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.299 22:00:39 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:21.299 22:00:39 -- common/autobuild_common.sh@479 -- $ date +%s 00:29:21.299 22:00:39 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727647239.XXXXXX 00:29:21.299 22:00:39 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727647239.7jxsoB 00:29:21.299 22:00:39 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:29:21.299 22:00:39 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:29:21.299 22:00:39 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:21.299 22:00:39 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:21.299 22:00:39 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:21.299 22:00:39 -- common/autobuild_common.sh@495 -- $ get_config_params 00:29:21.299 22:00:39 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:29:21.299 22:00:39 -- common/autotest_common.sh@10 -- $ set +x 00:29:21.299 22:00:39 -- common/autobuild_common.sh@495 -- $ 
config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:29:21.299 22:00:39 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:29:21.299 22:00:39 -- pm/common@17 -- $ local monitor 00:29:21.299 22:00:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:21.299 22:00:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:21.299 22:00:39 -- pm/common@25 -- $ sleep 1 00:29:21.299 22:00:39 -- pm/common@21 -- $ date +%s 00:29:21.299 22:00:39 -- pm/common@21 -- $ date +%s 00:29:21.299 22:00:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727647240 00:29:21.299 22:00:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727647240 00:29:21.299 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727647240_collect-cpu-load.pm.log 00:29:21.299 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727647240_collect-vmstat.pm.log 00:29:22.240 22:00:41 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:29:22.240 22:00:41 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:29:22.240 22:00:41 -- spdk/autopackage.sh@14 -- $ timing_finish 00:29:22.240 22:00:41 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:22.240 22:00:41 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:22.240 22:00:41 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:22.240 22:00:41 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:22.240 22:00:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:22.240 22:00:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:22.240 22:00:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:22.240 22:00:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:22.240 22:00:41 -- pm/common@44 -- $ pid=83176 00:29:22.240 22:00:41 -- pm/common@50 -- $ kill -TERM 83176 00:29:22.240 22:00:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:22.240 22:00:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:22.240 22:00:41 -- pm/common@44 -- $ pid=83177 00:29:22.240 22:00:41 -- pm/common@50 -- $ kill -TERM 83177 00:29:22.240 + [[ -n 5023 ]] 00:29:22.240 + sudo kill 5023 00:29:22.248 [Pipeline] } 00:29:22.262 [Pipeline] // timeout 00:29:22.265 [Pipeline] } 00:29:22.273 [Pipeline] // stage 00:29:22.276 [Pipeline] } 00:29:22.284 [Pipeline] // catchError 00:29:22.289 [Pipeline] stage 00:29:22.291 [Pipeline] { (Stop VM) 00:29:22.298 [Pipeline] sh 00:29:22.578 + vagrant halt 00:29:25.163 ==> default: Halting domain... 00:29:30.464 [Pipeline] sh 00:29:30.743 + vagrant destroy -f 00:29:33.284 ==> default: Removing domain... 
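The coverage post-processing in the entries above follows the usual lcov flow: capture counters from the instrumented run, merge them with the pre-test baseline, then strip vendored and system paths before a report is rendered. A condensed sketch under assumed file names; the real invocations also pass the --rc branch/function coverage options shown above:

    lcov -q -c --no-external -d "$repo" -t "$host" -o cov_test.info   # capture test-run counters
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info       # merge baseline + test data
    lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info   # drop vendored/system code
    genhtml cov_total.info -o coverage_html                           # typical final render (not shown in this log)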
00:29:33.555 [Pipeline] sh 00:29:33.834 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:33.846 [Pipeline] } 00:29:33.860 [Pipeline] // stage 00:29:33.866 [Pipeline] } 00:29:33.880 [Pipeline] // dir 00:29:33.886 [Pipeline] } 00:29:33.903 [Pipeline] // wrap 00:29:33.921 [Pipeline] } 00:29:33.970 [Pipeline] // catchError 00:29:33.983 [Pipeline] stage 00:29:33.984 [Pipeline] { (Epilogue) 00:29:33.993 [Pipeline] sh 00:29:34.274 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:39.560 [Pipeline] catchError 00:29:39.562 [Pipeline] { 00:29:39.577 [Pipeline] sh 00:29:39.867 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:39.867 Artifacts sizes are good 00:29:39.877 [Pipeline] } 00:29:39.893 [Pipeline] // catchError 00:29:39.906 [Pipeline] archiveArtifacts 00:29:39.914 Archiving artifacts 00:29:40.031 [Pipeline] cleanWs 00:29:40.043 [WS-CLEANUP] Deleting project workspace... 00:29:40.043 [WS-CLEANUP] Deferred wipeout is used... 00:29:40.049 [WS-CLEANUP] done 00:29:40.051 [Pipeline] } 00:29:40.067 [Pipeline] // stage 00:29:40.073 [Pipeline] } 00:29:40.088 [Pipeline] // node 00:29:40.094 [Pipeline] End of Pipeline 00:29:40.196 Finished: SUCCESS
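For reference, the ftl_restore_fast pass recorded earlier in this excerpt rests on an md5 round-trip: a checksum is taken before the FTL fast shutdown and re-verified after restore. A minimal sketch of that idiom with illustrative paths; the real test drives its I/O through the FTL bdev rather than a plain file:

    testfile=/tmp/testfile
    dd if=/dev/urandom of="$testfile" bs=4K count=256   # stand-in for the data written via FTL
    md5sum "$testfile" > "$testfile.md5"                # record the checksum before shutdown
    # ... fast shutdown, restart, restore ...
    md5sum -c "$testfile.md5"                           # prints "<file>: OK" on a faithful restore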